Science.gov

Sample records for supervised automated algorithm

  1. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    SciTech Connect

    Prakash, Varuna; Stainsby, Jeffrey A.; Satkunasingham, Janakan; Craig, Tim; Catton, Charles; Chan, Philip; Dawson, Laura; Hensel, Jennifer; Jaffray, David; Milosevic, Michael; Nichol, Alan; Sussman, Marshall S.; Lockwood, Gina; Menard, Cynthia

    2008-07-15

    Purpose: To validate a correlation coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point of interest displacements across each image frame relative to baseline were quantified manually and through the use of a template-matching software tool, termed 'Motiontrack.' Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies of ≥2 mm between the automated and manual displacements were uncommon, ranging in frequency from 0% (liver) to 9.7% (prostate). The standard deviations of intrafraction motion measured with each method correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all patients but one and substantially improved efficiency compared with manual quantification techniques (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation coefficient template-matching algorithm was efficient and accurate, and may play an important role in off-line adaptive approaches to intrafraction motion management.
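
    The Motiontrack source is not published; a minimal sketch of the underlying idea, normalized cross-correlation template matching of a point of interest across image frames, might look like the following (NumPy; the array layout, window sizes, and function names are assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def ncc(patch, template):
        """Normalized cross-correlation coefficient of two equal-sized patches."""
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
        return (p * t).sum() / denom if denom > 0 else 0.0

    def track_point(frames, point, half=8, search=12):
        """Track a (row, col) point of interest from frames[0] through all frames;
        returns per-frame (drow, dcol) displacements relative to baseline."""
        r0, c0 = point
        template = frames[0][r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
        displacements = []
        for frame in frames:
            best, best_rc = -2.0, (r0, c0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r, c = r0 + dr, c0 + dc
                    if r - half < 0 or c - half < 0:
                        continue  # search window ran off the image
                    patch = frame[r - half:r + half + 1, c - half:c + half + 1]
                    if patch.shape != template.shape:
                        continue
                    score = ncc(patch, template)
                    if score > best:
                        best, best_rc = score, (r, c)
            displacements.append((best_rc[0] - r0, best_rc[1] - c0))
        return displacements
    ```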

  2. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    PubMed Central

    Pizarro, Ricardo A.; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A.; Goldman, Aaron L.; Xiao, Ena; Luo, Qian; Berman, Karen F.; Callicott, Joseph H.; Weinberger, Daniel R.; Mattay, Venkata S.

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time-consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI. PMID:28066227
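
    The in-house quality features are not public; a hedged sketch of the overall recipe (feature matrix in, cross-validated SVM accuracy out) could look like this, with random placeholders standing in for the real features and labels:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # One row of global/ROI quality features per 3D-MRI volume, with
    # investigator-determined labels (1 = usable, 0 = artifact-corrupted).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1457, 20))      # random placeholder for the real features
    y = rng.integers(0, 2, size=1457)    # random placeholder labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    print(f"cross-validated accuracy: {acc:.2f}")
    ```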

  3. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and the Incremental Halting Test - for estimating the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses and some of the important assumptions on which it is based, and we illustrate each through an example. Finally, we discuss the relative advantages and disadvantages of each method.
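
    For the first of these, k-fold cross-validation, a small illustration is easy to give (scikit-learn; the dataset and classifier are chosen arbitrarily for the example):

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # k-fold CV: hold out each of the k folds in turn, train on the rest,
    # and average the held-out accuracies to estimate the true accuracy.
    X, y = load_breast_cancer(return_X_y=True)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
    print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```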

  4. Automated Classification and Correlation of Drill Cores using High-Resolution Hyperspectral Images and Supervised Pattern Classification Algorithms. Applications to Paleoseismology

    NASA Astrophysics Data System (ADS)

    Ragona, D. E.; Minster, B.; Rockwell, T.; Jasso, H.

    2006-12-01

    The standard methodology for describing, classifying and correlating geologic materials in the field or lab relies on physical inspection of samples, sometimes with the assistance of conventional analytical techniques (e.g. XRD, microscopy, particle size analysis). This is commonly both time-consuming and inherently subjective. Many geological materials share identical visible properties (e.g. fine-grained materials, alteration minerals) and therefore cannot be mapped using the human eye alone. Recent investigations have shown that ground-based hyperspectral imaging provides an effective method to study and digitally store stratigraphic and structural data from cores or field exposures. Neural networks and naive Bayesian classifiers supply a variety of well-established techniques for pattern recognition, especially for data with high-dimensional inputs and outputs. In this poster, we present a new methodology for automatic mapping of sedimentary stratigraphy in the lab (drill cores, samples) or the field (outcrops, exposures) using short-wave infrared (SWIR) hyperspectral images and these two supervised classification algorithms. High-spatial/spectral-resolution data from large sediment samples (drill cores) from a paleoseismic excavation site were collected using a portable hyperspectral scanner with 245 continuous channels measured across the 960 to 2404 nm spectral range. The data were corrected for geometric and radiometric distortions and pre-processed to obtain reflectance at each pixel of the images. We built an example set using hundreds of reflectance spectra collected from the sediment core images. The examples were grouped into eight classes corresponding to materials found in the samples. We constructed two additional example sets by computing the 2-norm normalization and the derivative of the smoothed original reflectance examples. Each example set was divided into four subsets: training, training test, verification and validation. A multi
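
    As a rough illustration of one of the two classifier families named above, the following sketch trains a Gaussian naive Bayes model on synthetic stand-ins for the labeled reflectance spectra (the shapes, class structure, and data here are assumptions, not the study's example sets):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    # Random placeholders for labeled reflectance spectra: 245 SWIR channels
    # per example, eight material classes.
    rng = np.random.default_rng(1)
    n_per_class, n_channels, n_classes = 100, 245, 8
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_channels))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
    ```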

  5. Random forest automated supervised classification of Hipparcos periodic variable stars

    NASA Astrophysics Data System (ADS)

    Dubath, P.; Rimoldini, L.; Süveges, M.; Blomme, J.; López, M.; Sarro, L. M.; De Ridder, J.; Cuypers, J.; Guy, L.; Lecoeur, I.; Nienartowicz, K.; Jan, A.; Beck, M.; Mowlavi, N.; De Cat, P.; Lebzelter, T.; Eyer, L.

    2011-07-01

    We present an evaluation of the performance of an automated classification of the Hipparcos periodic variable stars into 26 types. The sub-sample with the most reliable variability types available in the literature is used to train supervised algorithms to characterize the type dependencies on a number of attributes. The most useful attributes evaluated with the random forest methodology include, in decreasing order of importance, the period, the amplitude, the V-I colour index, the absolute magnitude, the residual around the folded light-curve model, the magnitude distribution skewness and the amplitude of the second harmonic of the Fourier series model relative to that of the fundamental frequency. Random forests and a multi-stage scheme involving Bayesian network and Gaussian mixture methods lead to statistically equivalent results. In standard 10-fold cross-validation (CV) experiments, the rate of correct classification is between 90 and 100 per cent, depending on the variability type. The main mis-classification cases, up to a rate of about 10 per cent, arise due to confusion between SPB and ACV blue variables and between eclipsing binaries, ellipsoidal variables and other variability types. Our training set and the predicted types for the other Hipparcos periodic stars are available online.
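
    The attribute-importance ranking described above maps directly onto random forest feature importances; a hedged sketch with the attribute names taken from the abstract (the data are random placeholders, not the Hipparcos training set) might be:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder light-curve attributes and types; 26 variability classes.
    attributes = ["log_period", "amplitude", "V_I_colour", "abs_magnitude",
                  "model_residual", "skewness", "harmonic2_over_fundamental"]
    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, len(attributes)))
    y = rng.integers(0, 26, size=500)

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)
    for name, imp in sorted(zip(attributes, rf.feature_importances_),
                            key=lambda pair: -pair[1]):
        print(f"{name:28s} {imp:.3f}")       # attribute importance ranking
    print(f"out-of-bag accuracy: {rf.oob_score_:.2f}")
    ```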

  6. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  7. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  8. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
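
    The specific POSE algorithms tested are not spelled out in the abstract. One standard building block for pose from matched target features, shown purely as an illustrative sketch and not as the flight algorithm, is the least-squares rigid fit (Kabsch/Procrustes) between the known feature layout and the measured 3-D points:

    ```python
    import numpy as np

    def rigid_pose(model_pts, measured_pts):
        """Least-squares R, t with measured ≈ R @ model + t.
        model_pts, measured_pts: (N, 3) arrays of matched feature coordinates."""
        mu_m = model_pts.mean(axis=0)
        mu_s = measured_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (measured_pts - mu_s)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_s - R @ mu_m
        return R, t

    # Simulated check at roughly the docking range quoted above.
    rng = np.random.default_rng(3)
    model = rng.normal(size=(6, 3))              # known target feature layout
    measured = model + np.array([0.0, 0.0, 1.22]) \
        + rng.normal(scale=1e-3, size=model.shape)
    R, t = rigid_pose(model, measured)
    print(np.round(t, 3))                        # ≈ [0, 0, 1.22]
    ```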

  9. Automated training for algorithms that learn from genomic data.

    PubMed

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, these updates are generally not incorporated into published machine learning algorithms, which can therefore become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP, in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.
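
    A hedged sketch of the proposed model of operation, an automated gather-and-retrain pipeline, might look like this; the data-fetching step is a placeholder for a query against the current release of an expert-curated database:

    ```python
    import numpy as np
    from dataclasses import dataclass
    from datetime import date
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def fetch_training_data():
        """Placeholder for the automated gathering step: in the proposed model
        this would query the current database release and featurize the
        retrieved gene/protein records."""
        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        return X, y

    @dataclass
    class VersionedClassifier:
        trained_on: date
        n_examples: int
        cv_accuracy: float
        model: LogisticRegression

    def retrain():
        X, y = fetch_training_data()
        model = LogisticRegression(max_iter=1000)
        cv_acc = cross_val_score(model, X, y, cv=5).mean()
        model.fit(X, y)
        return VersionedClassifier(date.today(), len(y), cv_acc, model)

    clf = retrain()   # run on a schedule so the classifier tracks database updates
    print(clf.trained_on, clf.n_examples, round(clf.cv_accuracy, 2))
    ```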

  10. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms

    PubMed Central

    Zwartjes, Ardjan; Havinga, Paul J. M.; Smit, Gerard J. M.; Hurink, Johann L.

    2016-01-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution. PMID:27706071

  11. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. The proposed algorithms that use multivariate models are expected to scale up better with the dimensionality of the problem than existing EAs.
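
    The paper's exact algorithms are not reproduced here; the basic idea, discretize the good solutions, model the per-variable bin frequencies, and sample new solutions from the model instead of recombining, can be sketched as follows (equal-width bins and a univariate model are assumptions):

    ```python
    import numpy as np

    def discretized_ea(f, dim, lo, hi, bins=16, pop=100, keep=0.3, iters=60, seed=0):
        """Minimize f over [lo, hi]^dim: discretize the best solutions, model
        bin frequencies per variable, and sample new solutions from the model
        instead of using crossover and mutation."""
        rng = np.random.default_rng(seed)
        edges = np.linspace(lo, hi, bins + 1)
        X = rng.uniform(lo, hi, size=(pop, dim))
        for _ in range(iters):
            elite = X[np.argsort([f(x) for x in X])[: int(keep * pop)]]
            # univariate model: frequency of each bin, per variable (+1 smoothing)
            idx = np.clip(np.digitize(elite, edges) - 1, 0, bins - 1)
            probs = np.stack([np.bincount(idx[:, j], minlength=bins) + 1.0
                              for j in range(dim)])
            probs /= probs.sum(axis=1, keepdims=True)
            # sample a bin per variable, then a uniform point inside that bin
            b = np.stack([rng.choice(bins, size=pop, p=probs[j])
                          for j in range(dim)], axis=1)
            X = rng.uniform(edges[b], edges[b + 1])
        return X[np.argmin([f(x) for x in X])]

    best = discretized_ea(lambda x: ((x - 0.5) ** 2).sum(), dim=5, lo=-2.0, hi=2.0)
    print(np.round(best, 2))   # each coordinate lands in a bin containing 0.5
    ```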

  12. A novel supervised trajectory segmentation algorithm identifies distinct types of human adenovirus motion in host cells.

    PubMed

    Helmuth, Jo A; Burckhardt, Christoph J; Koumoutsakos, Petros; Greber, Urs F; Sbalzarini, Ivo F

    2007-09-01

    Biological trajectories can be characterized by transient patterns that may provide insight into the interactions of the moving object with its immediate environment. The accurate and automated identification of trajectory motifs is important for the understanding of the underlying mechanisms. In this work, we develop a novel trajectory segmentation algorithm based on supervised support vector classification. The algorithm is validated on synthetic data and applied to the identification of trajectory fingerprints of fluorescently tagged human adenovirus particles in live cells. In virus trajectories on the cell surface, periods of confined motion, slow drift, and fast drift are efficiently detected. Additionally, directed motion is found for viruses in the cytoplasm. The algorithm enables the linking of microscopic observations to molecular phenomena that are critical in many biological processes, including infectious pathogen entry and signal transduction.
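
    A hedged sketch of the general approach, sliding-window trajectory features fed to a support vector classifier, could look like this; the two features and the synthetic trajectory are assumptions, not the paper's feature set:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def window_features(traj, half=5):
        """Per-point features from a sliding window over an (N, 2) trajectory:
        mean step length (speed) and net-over-total displacement (straightness)."""
        feats = []
        for i in range(len(traj)):
            w = traj[max(0, i - half): i + half + 1]
            steps = np.linalg.norm(np.diff(w, axis=0), axis=1)
            total = steps.sum()
            net = np.linalg.norm(w[-1] - w[0])
            feats.append([steps.mean(), net / total if total > 0 else 0.0])
        return np.array(feats)

    # Synthetic trajectory: confined jitter followed by fast directed drift.
    rng = np.random.default_rng(5)
    confined = rng.normal(scale=0.05, size=(150, 2)).cumsum(axis=0) * 0.1
    drift = confined[-1] + np.cumsum(np.tile([0.3, 0.2], (150, 1))
                                     + rng.normal(scale=0.05, size=(150, 2)), axis=0)
    traj = np.vstack([confined, drift])
    labels = np.array([0] * 150 + [1] * 150)    # stand-in for expert annotation

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(window_features(traj), labels)      # train on annotated trajectories
    print((clf.predict(window_features(traj)) == labels).mean())
    ```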

  13. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    PubMed

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's r), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
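
    The pipeline reduces to a k-NN classifier over per-pixel features yielding a probability map; a minimal sketch with random placeholder features (the real ones are the intensity, GLCM, and filter-bank measures listed above) might be:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # One feature row per pixel, with grader labels (1 = GA, 0 = background).
    rng = np.random.default_rng(6)
    h, w, n_features = 64, 64, 12
    X = rng.normal(size=(h * w, n_features))    # placeholder pixel features
    y = rng.integers(0, 2, size=h * w)          # placeholder grader labels

    knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
    # Probability map: likelihood that each pixel belongs to geographic atrophy.
    prob_map = knn.predict_proba(X)[:, 1].reshape(h, w)
    ga_mask = prob_map > 0.5
    ```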

  14. Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy.

    PubMed

    Salem, Sameh A; Salem, Nancy M; Nandi, Asoke K

    2007-03-01

    In this paper, segmentation of blood vessels from colour retinal images using a novel clustering algorithm with a partial supervision strategy is proposed. The proposed clustering algorithm, a RAdius-based Clustering ALgorithm (RACAL), uses a distance-based principle to map the distributions of the data, utilising the premise that clusters are determined by a distance parameter, without having to specify the number of clusters. Additionally, the proposed clustering algorithm is enhanced with a partial supervision strategy, and it is demonstrated that it is able to segment blood vessels of small diameter and low contrast. Results are compared with those from the KNN classifier and show that the proposed RACAL performs better than the KNN in the case of abnormal images, as it succeeds in segmenting small and low-contrast blood vessels, while it achieves comparable results for normal images. For the automation process, RACAL can be used as a classifier, and results show that it performs better than the KNN classifier on both normal and abnormal images.

  15. Automated extraction of the cortical sulci based on a supervised learning approach.

    PubMed

    Tu, Zhuowen; Zheng, Songfeng; Yuille, Alan L; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2007-04-01

    It is important to detect and extract the major cortical sulci from brain images, but manually annotating these sulci is a time-consuming task and requires the labeler to follow complex protocols. This paper proposes a learning-based algorithm for automated extraction of the major cortical sulci from magnetic resonance imaging (MRI) volumes and cortical surfaces. Unlike alternative methods for detecting the major cortical sulci, which use a small number of predefined rules based on properties of the cortical surface such as the mean curvature, our approach learns a discriminative model using the probabilistic boosting tree algorithm (PBT). PBT is a supervised learning approach which selects and combines hundreds of features at different scales, such as curvatures, gradients and shape index. Our method can be applied to either MRI volumes or cortical surfaces. It first outputs a probability map which indicates how likely each voxel lies on a major sulcal curve. Next, it applies dynamic programming to extract the best curve based on the probability map and a shape prior. The algorithm has almost no parameters to tune for extracting different major sulci. It is very fast (it runs in under 1 min per sulcus including the time to compute the discriminative models) due to efficient implementation of the features (e.g., using the integral volume to rapidly compute the responses of 3-D Haar filters). Because the algorithm can be applied to MRI volumes directly, there is no need to perform preprocessing such as tissue segmentation or mapping to a canonical space. The learning aspect of our approach makes the system very flexible and general. For illustration, we use volumes of the right hemisphere with several major cortical sulci manually labeled. The algorithm is tested on two groups of data, including some brains from patients with Williams Syndrome, and the results are very encouraging.

  16. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
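
    The paper's main finding, that neighborhood-aware features matter more than the classifier, can be illustrated with a simple logistic regression over smoothed-intensity features; the synthetic volumes and the 3x3x3 mean filter below are assumed stand-ins, not the paper's feature extraction functions:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-ins for coregistered T1-w, T2-w and FLAIR volumes.
    rng = np.random.default_rng(7)
    shape = (32, 32, 32)
    modalities = {m: rng.normal(size=shape) for m in ("T1", "T2", "FLAIR")}
    lesion = rng.random(shape) < 0.05           # placeholder manual segmentation

    features = []
    for vol in modalities.values():
        features.append(vol.ravel())                          # voxel intensity
        features.append(uniform_filter(vol, size=3).ravel())  # neighborhood mean
    X = np.column_stack(features)
    y = lesion.ravel().astype(int)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    prob = clf.predict_proba(X)[:, 1]   # per-voxel lesion probability
    ```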

  17. Algorithms for Automated DNA Assembly

    DTIC Science & Technology

    2010-01-01

    Fragmentary extract: '... Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed ... we compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution ...'

  18. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  19. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex, and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ Tm. The second field integral must be below 50 × 10⁻⁶ Tm². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc.; the analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing the tuning algorithms is described; the algorithms that determine the required number and placement of the shims are discussed in detail.

  1. A numeric comparison of variable selection algorithms for supervised learning

    NASA Astrophysics Data System (ADS)

    Palombo, G.; Narsky, I.

    2009-12-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, for instance imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community ( http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ("Add N Remove R") implemented in SPR.
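
    Plain sequential forward selection, a simpler relative of the "Add N Remove R" wrapper named above, is available off the shelf; an illustrative run (the dataset and classifier are arbitrary choices for the example) could be:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    est = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Greedily add one variable at a time, keeping the addition that most
    # improves cross-validated accuracy.
    sfs = SequentialFeatureSelector(est, n_features_to_select=8,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    acc = cross_val_score(est, sfs.transform(X), y, cv=5).mean()
    print(f"accuracy with 8 of {X.shape[1]} variables: {acc:.3f}")
    ```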

  2. Experiments on Supervised Learning Algorithms for Text Categorization

    NASA Technical Reports Server (NTRS)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.

  3. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    NASA Astrophysics Data System (ADS)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2017-01-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.

  4. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    SciTech Connect

    Carter, Joshua A.; Agol, Eric

    2013-03-10

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ('smearing') as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  5. Value Focused Thinking Applications to Supervised Pattern Classification With Extensions to Hyperspectral Anomaly Detection Algorithms

    DTIC Science & Technology

    2015-03-26

    Fragmentary extract (thesis front matter): 'Value Focused Thinking Applications to Supervised Pattern Classification with Extensions to Hyperspectral Anomaly Detection Algorithms,' thesis by David E. Scanland, Captain, USAF, AFIT-ENS-MS-15-M-121, Department of the Air Force, March 2015.

  6. Semi-supervised clustering algorithm for haplotype assembly problem based on MEC model.

    PubMed

    Xu, Xin-Shun; Li, Ying-Xin

    2012-01-01

    Haplotype assembly is the problem of inferring a pair of haplotypes from localized polymorphism data. In this paper, a semi-supervised clustering algorithm, SSK (semi-supervised K-means), is proposed for it; to our knowledge, this is the first semi-supervised clustering method for this problem. In SSK, some positive information is first extracted. This information is then used to help k-means cluster all SNP fragments into two sets, from which two haplotypes can be reconstructed. The performance of SSK is tested on both real and simulated data. The results show that it outperforms several state-of-the-art algorithms under the minimum error correction (MEC) model.

  7. ALFA: an automated line fitting algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2016-03-01

    I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.
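
    A toy version of the approach, a genetic algorithm optimizing the parameters of a synthetic spectrum built from a line catalogue, can be sketched as follows; the GA operators, parameter bounds, and two-line catalogue here are assumptions, not ALFA's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic "observed" spectrum: two catalogue lines with unknown fluxes,
    # a shared Gaussian width, and a small common wavelength shift.
    wav = np.linspace(6540.0, 6600.0, 600)
    catalogue = np.array([6548.0, 6583.0])

    def synth(p):
        flux1, flux2, sigma, shift = p
        model = np.zeros_like(wav)
        for centre, flux in zip(catalogue + shift, (flux1, flux2)):
            model += flux * np.exp(-0.5 * ((wav - centre) / sigma) ** 2)
        return model

    observed = synth([3.0, 9.0, 1.2, 0.4]) + rng.normal(scale=0.1, size=wav.size)

    def fitness(p):
        return -np.sum((observed - synth(p)) ** 2)   # negative chi-square-like

    lo = np.array([0.0, 0.0, 0.3, -2.0])
    hi = np.array([20.0, 20.0, 5.0, 2.0])
    pop = rng.uniform(lo, hi, size=(80, 4))
    for _ in range(200):
        scores = np.array([fitness(p) for p in pop])
        def tournament():
            i, j = rng.integers(0, len(pop), size=2)
            return pop[i] if scores[i] > scores[j] else pop[j]
        children = [pop[np.argmax(scores)]]           # elitism: keep the best
        while len(children) < len(pop):
            child = np.where(rng.random(4) < 0.5, tournament(), tournament())
            child += rng.normal(scale=0.02 * (hi - lo))   # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)

    best = pop[np.argmax([fitness(p) for p in pop])]
    print(np.round(best, 2))   # should approach [3.0, 9.0, 1.2, 0.4]
    ```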

  8. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT, area and the University of Pavia scene.

  9. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    NASA Astrophysics Data System (ADS)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and further apart wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that semi-supervised learning greatly improves the generalization performance of the LSSVM model, and the improvement grows as the number of training samples decreases. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.
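
    LSSVM training reduces to solving one linear system rather than a quadratic program. A minimal sketch of one common formulation (RBF kernel; synthetic data; this is the plain supervised LSSVM, not the authors' semi-supervised extension) is:

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def lssvm_train(X, y, C=10.0):
        """LSSVM classification: equality constraints turn training into one
        linear system. Labels y must be in {-1, +1}."""
        n = len(y)
        K = rbf_kernel(X, X)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / C     # regularized kernel block
        rhs = np.concatenate([[0.0], y.astype(float)])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]            # bias b, coefficients alpha

    def lssvm_predict(X_train, b, alpha, X_new):
        return np.sign(rbf_kernel(X_new, X_train) @ alpha + b)

    rng = np.random.default_rng(9)
    X = rng.normal(size=(60, 2))
    y = np.sign(X[:, 0] * X[:, 1])        # XOR-like labels, not linearly separable
    b, alpha = lssvm_train(X, y)
    print((lssvm_predict(X, b, alpha, X) == y).mean())
    ```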

  10. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search though millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  11. Adaptive Automation for Human Supervision of Multiple Uninhabited Vehicles: Effects on Change Detection, Situation Awareness, and Mental Workload

    DTIC Science & Technology

    2009-01-01

    Fragmentary extract (running header and citation only): 'Adaptive Automation for Human Supervision of Multiple Uninhabited Vehicles: Effects on Change Detection, Situation Awareness, and Mental Workload,' Military Psychology, 21(2), 270-297 (2009).

  12. Automated morphological classification of galaxies based on projection gradient nonnegative matrix factorization algorithm

    NASA Astrophysics Data System (ADS)

    Selim, I. M.; Abd El Aziz, Mohamed

    2017-02-01

    The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, supervised machine-learning astronomical classification scheme based on the Nonnegative Matrix Factorization algorithm. The scheme distinguishes between types roughly corresponding to the Hubble types: elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm was evaluated on two datasets of different sizes (a small dataset of 110 images and a large dataset of 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ~93% for the small dataset and ~92% for the large one. These results are in good agreement with the visual classifications.
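
    A hedged sketch of the general scheme, nonnegative matrix factorization for feature extraction followed by a simple supervised classifier, might look like this; it uses scikit-learn's NMF solver rather than the paper's projection-gradient variant, and the data are random placeholders for the EFIGI images:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Placeholder for flattened, nonnegative galaxy images (e.g. 32x32 pixels)
    # with four coarse Hubble-type labels.
    rng = np.random.default_rng(10)
    X = rng.random(size=(110, 32 * 32))
    y = rng.integers(0, 4, size=110)   # elliptical/lenticular/spiral/irregular

    # NMF keeps the factors nonnegative, giving parts-based galaxy features;
    # a simple classifier then operates in the low-dimensional factor space.
    pipe = make_pipeline(NMF(n_components=12, init="nndsvda", max_iter=500),
                         KNeighborsClassifier(n_neighbors=5))
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```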

  13. Automated segment matching algorithm - theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results of automating the U.S. Department of Agriculture's process of segment shifting are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half-pixel accuracy.

  14. Derivation of a Novel Efficient Supervised Learning Algorithm from Cortical-Subcortical Loops

    PubMed Central

    Chandrashekar, Ashok; Granger, Richard

    2012-01-01

    Although brain circuits presumably carry out powerful perceptual algorithms, few instances of derived biological methods have been found to compete favorably against algorithms that have been engineered for specific applications. We forward a novel analysis of a subset of functions of cortical–subcortical loops, which constitute more than 80% of the human brain, thus likely underlying a broad range of cognitive functions. We describe a family of operations performed by the derived method, including a non-standard method for supervised classification, which may underlie some forms of cortically dependent associative learning. The novel supervised classifier is compared against widely used algorithms for classification, including support vector machines (SVM) and k-nearest neighbor methods, achieving corresponding classification rates – at a fraction of the time and space costs. This represents an instance of a biologically derived algorithm comparing favorably against widely used machine learning methods on well-studied tasks. PMID:22291632

  15. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test.

  16. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    SciTech Connect

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib; Chaum, Edward; Karnowski, Thomas Paul; Meriaudeau, Fabrice

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images.

  17. Agent-Based Automated Algorithm Generator

    DTIC Science & Technology

    2010-01-12

    Fragmentary extract: '... The library of D/P algorithms will be hosted in server-side agents, consisting of four types of major agents: Fault Detection and Isolation Agent (FDIA), Prognostic Agent (PA), Fusion Agent (FA), and Maintenance Mining Agent (MMA). FDI agents perform diagnostics ...'

  18. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  19. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

    Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes in it without reducing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under consideration and have been the topic of much research up to the present day. Developing automated map generalization algorithms can reduce map production time and lend objectivity to the process (McMaster and Shea, 1988). Modification of the scale has traditionally been a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as different users may generate different maps from the same requirements. Automating generalization based on cartographic rules and constraints, by contrast, can give consistent results. Developing an automated system for map generation is also a demand of this rapidly changing world. The research we conducted considers only generalization of roads, as the road network is one of the indispensable parts of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. The work comprises a comparative study of the currently available generalization software, operations, and algorithms, and considers the advantages and drawbacks of the existing software used worldwide. The research concludes with the development of a road-network generalization tool, which explores the use of the open-source Python programming language and compares different road-network generalization algorithms, and with a final generalized road map of the study area. Thus, the paper discusses alternative solutions for the automated generalization of linear objects using GIS technologies.

  1. The Marshall Automated Wind Algorithm for Geostationary Satellite Wind Applications

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Atkinson, Robert J.

    1998-01-01

    The Marshall Automated Wind (MAW) algorithm was developed over a decade ago in support of specialized studies of mesoscale meteorology. In recent years, the algorithm has been generalized to address global climate issues and other specific objectives related to NASA missions. The MAW algorithm uses a tracking scheme which minimizes image brightness temperature differences in a sequence of satellite images to determine feature displacement (winds). With the appropriate methodology, accurate satellite-derived winds can be obtained from visible, infrared, and water vapor imagery. Typical errors are less than 4 m/s but depend on the quality control constraints used in post-processing. Key to this success is the judicious use of the template size and search area used for tracking, the image resolution and time sampling, and the selection of appropriate statistical constraints, which may vary with image type and desired application. The conference paper and subsequent poster will provide details of the technique and examples of its application.

  2. A recommendation algorithm for automating corollary order generation.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; McCoy, J M

    2009-11-14

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards.
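
    The core of item-based collaborative filtering on order history, without the association-rule interestingness measures the paper adds on top, can be sketched as follows (binary placeholder data standing in for the real order history):

    ```python
    import numpy as np

    # Rows = order sessions (e.g. hospital encounters), columns = orderable items.
    rng = np.random.default_rng(11)
    orders = (rng.random(size=(1000, 50)) < 0.1).astype(float)

    # Item-item cosine similarity from co-occurrence across order sessions.
    norms = np.linalg.norm(orders, axis=0)
    norms[norms == 0] = 1.0                     # guard against never-ordered items
    sim = (orders.T @ orders) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)

    def suggest_corollaries(item, k=5):
        """Top-k items most often co-ordered with `item` (candidate corollaries)."""
        return np.argsort(sim[item])[::-1][:k]

    print(suggest_corollaries(item=3))
    ```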

  3. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated choice of acupuncture points for EHF-therapy is offered. The recipe formed by the algorithm for the automated choice of points for acupunctural actions is recommendational in character. Clinical investigations showed that application of the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to effectively solve many problems of organism functioning.

  4. Enhancing time-series detection algorithms for automated biosurveillance.

    PubMed

    Tokars, Jerome I; Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A

    2009-04-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14-28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data.
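
    A minimal sketch of this style of adaptive alerting, assuming daily syndrome counts and total visit counts as inputs; the window length and z threshold are illustrative choices, not BioSense settings. Day-of-week stratification would amount to restricting the baseline to the same weekday/weekend class as the current day.

```python
import numpy as np

def daily_alert(counts, visits, baseline_days=28, min_sd=1.0, z_thresh=2.33):
    """Flag today's syndrome count (counts[-1]) against a trailing baseline.
    Mirrors two enhancements from the abstract: an SD floor (min_sd) and
    total clinic visits as a surrogate denominator."""
    counts = np.asarray(counts, dtype=float)
    visits = np.asarray(visits, dtype=float)
    base_c = counts[-(baseline_days + 1):-1]     # the 28 days before today
    base_v = visits[-(baseline_days + 1):-1]
    expected = base_c.mean() * visits[-1] / base_v.mean()  # visit-adjusted mean
    sd = max(base_c.std(ddof=1), min_sd)                   # SD floor
    z = (counts[-1] - expected) / sd
    return z > z_thresh, z
```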

  5. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    PubMed Central

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers conduct this task more effectively. Design/Results This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  6. Optimization of supervised self-organizing maps with genetic algorithms for classification of urinary calculi

    NASA Astrophysics Data System (ADS)

    Kuzmanovski, Igor; Trpkovska, Mira; Šoptrajanov, Bojan

    2005-06-01

    Supervised self-organizing maps were used for classification of 160 infrared spectra of urinary calculi composed of calcium oxalates (whewellite and weddellite), pure or in binary or ternary mixtures with carbonate apatite, struvite or uric acid. The study was focused to such calculi since more than 80% of the samples analyzed contained some or all of the above-mentioned constituents. The classification was done on the basis of the infrared spectra in the 1450-450 cm -1 region. Two procedures were used in order to find the most suitable size and for optimizing the self-organizing map of which that using the genetic algorithms gave better results. Using this procedure several sets of solutions with zero misclassifications were obtained. Thus, the self-organizing maps may be considered as a promising tool for qualitative analysis of urinary calculi.

  7. Integrating GIS and genetic algorithms for automating land partitioning

    NASA Astrophysics Data System (ADS)

    Demetriou, Demetris; See, Linda; Stillwell, John

    2014-08-01

    Land consolidation is considered to be the most effective land management planning approach for controlling land fragmentation and hence improving agricultural efficiency. Land partitioning is a basic process of land consolidation that involves the subdivision of land into smaller sub-spaces subject to a number of constraints. This paper explains the development of a module called LandParcelS (Land Parcelling System) that integrates geographical information systems and a genetic algorithm to automate the land partitioning process by designing and optimising land parcels in terms of their shape, size and value. This new module has been applied to two land blocks that are part of a larger case study area in Cyprus. Partitioning is carried out by guiding a Thiessen polygon process within ArcGIS and it is treated as a multiobjective problem. The results suggest that a step forward has been made in solving this complex spatial problem, although further research is needed to improve the algorithm. The contribution of this research extends land partitioning and space partitioning in general, since these approaches may have relevance to other spatial processes that involve single or multi-objective problems that could be solved in the future by spatial evolutionary algorithms.

  8. Appropriate training area selection for supervised texture classification by using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Maeda, Masaru; Arai, Kohei

    2003-03-01

    A new method for the selection of appropriate training areas for supervised texture classification is proposed. In the method, genetic algorithms (GA) are employed to determine the appropriate location and size of each texture category's training area. The proposed method consists of the following procedures: 1) the number of classification categories and their kinds are determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation in the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. Some experiments are conducted to evaluate the capability of the proposed method to find appropriate training areas, using images from Brodatz's photo album and their rotated versions. The experimental results show that the proposed method can select appropriate training areas much faster than the conventional trial-and-error method. The proposed method has also been applied to supervised texture classification of airborne multispectral scanner images, and the results show that it can provide appropriate training areas for reasonable classification results.

  9. Automated classification of female facial beauty by image analysis and supervised learning

    NASA Astrophysics Data System (ADS)

    Gunes, Hatice; Piccardi, Massimo; Jan, Tony

    2004-01-01

    Whether the perception of facial beauty is a universal concept has long been debated amongst psychologists and anthropologists. In this paper, we performed experiments to evaluate the extent of beauty universality by asking a number of diverse human referees to grade the same collection of female facial images. The results obtained show that the different individuals gave similar votes, thus strongly supporting the concept of beauty universality. We then trained an automated classifier using the human votes as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved shows that this classifier can be used as a general, automated tool for the objective classification of female facial beauty. Potential applications exist in the entertainment industry and plastic surgery.

  10. Generation of a supervised classification algorithm for time-series variable stars with an application to the LINEAR dataset

    NASA Astrophysics Data System (ADS)

    Johnston, K. B.; Oluseyi, H. M.

    2017-04-01

    With the advent of digital astronomy, new benefits and new problems have been presented to the modern day astronomer. While data can be captured in a more efficient and accurate manner using digital means, the efficiency of data retrieval has led to an overload of scientific data for processing and storage. This paper will focus on the construction and application of a supervised pattern classification algorithm for the identification of variable stars. Given the reduction of a survey of stars into a standard feature space, the problem of using prior patterns to identify new observed patterns can be reduced to time-tested classification methodologies and algorithms. Such supervised methods, so called because the user trains the algorithms prior to application using patterns with known classes or labels, provide a means to probabilistically determine the estimated class type of new observations. This paper will demonstrate the construction and application of a supervised classification algorithm on variable star data. The classifier is applied to a set of 192,744 LINEAR data points. Of the original samples, 34,451 unique stars were classified with high confidence (high level of probability of being the true class).

  11. How to measure metallicity from five-band photometry with supervised machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana

    2016-02-01

    We demonstrate that it is possible to measure metallicity from the SDSS five-band photometry to better than 0.1 dex using supervised machine learning algorithms. Using spectroscopic estimates of metallicity as ground truth, we build, optimize and train several estimators to predict metallicity. We use the observed photometry, as well as derived quantities such as stellar mass and photometric redshift, as features, and we build two sample data sets at median redshifts of 0.103 and 0.218 and median r-band magnitudes of 17.5 and 18.3, respectively. We find that ensemble methods, such as random forests and extremely randomized trees, as well as support vector machines, all perform comparably well and can measure metallicity with a Root Mean Square Error (RMSE) of 0.081 and 0.090 for the two data sets when all objects are included. The fraction of outliers (objects for which |Ztrue - Zpred| > 0.2 dex) is 2.2 and 3.9 per cent, respectively, and the RMSE decreases to 0.068 and 0.069 if those objects are excluded. Because of the ability of these algorithms to capture complex relationships between data and target, our technique performs better than previously proposed methods that sought to fit metallicity using an analytic fitting formula, and has 3× more constraining power than SED-fitting-based methods. Additionally, this method is extremely forgiving of contamination in the training set, and can be used with very satisfactory results for sample sizes of a few hundred objects. We distribute all the routines needed to reproduce our results and apply them to other data sets.
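
    The workflow maps naturally onto standard tooling; the following sketch trains a random forest on synthetic placeholder features and reports the two summary statistics quoted above (RMSE and the fraction of |Ztrue - Zpred| > 0.2 dex outliers). Data and parameters are stand-ins, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 7))   # stand-ins for magnitudes, mass, photo-z
y = X @ rng.normal(size=7) * 0.05 + rng.normal(scale=0.08, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
outlier_frac = np.mean(np.abs(pred - y_te) > 0.2)   # |Ztrue - Zpred| > 0.2 dex
print(f"RMSE = {rmse:.3f} dex, outliers = {100 * outlier_frac:.1f}%")
```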

  12. Visualizing Global Wildfire Automated Biomass Burning Algorithm Data

    NASA Astrophysics Data System (ADS)

    Schmidt, C. C.; Hoffman, J.; Prins, E. M.

    2013-12-01

    The Wildfire Automated Biomass Burning Algorithm (WFABBA) produces fire detection and characterization from a global constellation of geostationary satellites on a realtime basis. Presenting these data in a timely and meaningful way has been a challenge, but as hardware and software have advanced and web tools have evolved, new options have rapidly arisen. The WFABBA team at the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the Space Science and Engineering Center (SSEC) has begun implementation of a web-based framework that allows a user to visualize current and archived fire data from NOAA's Geostationary Operational Environmental Satellite (GOES), EUMETSAT's Meteosat Second Generation (MSG), JMA's Multifunction Transport Satellite (MTSAT), and KMA's COMS series of satellites. User group needs vary from simple examination of the most recent data to multi-hour composites and animations, as well as saving datasets for further review. In order to maximize the usefulness of the data, a user-friendly and scalable interface has been under development that will, when complete, allow access to approximately 18 years of WFABBA data, as well as the data produced in real time. Implemented, planned, and potential additional features will be examined.

  13. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from the massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to such supervised classifications. Here we developed global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree, and it is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, for example, there are 174 training sites, ranging in size from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the training samples number over six million. We therefore developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar screening process with a looser threshold is applied to account for possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
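
    The site-level screening step might look roughly like the following sketch, which bins samples by impervious percentage and drops univariate outliers within each bin; the multivariate screen and the looser scene-level pass are omitted, and the threshold is an illustrative assumption.

```python
import numpy as np

def screen_site_samples(features, impervious_pct, z_max=3.0):
    """Site-level screening sketch: bin training samples into ten groups by
    impervious percentage and drop univariate outliers within each group."""
    features = np.asarray(features, dtype=float)
    keep = np.zeros(len(features), dtype=bool)
    groups = np.clip((np.asarray(impervious_pct) // 10).astype(int), 0, 9)
    for g in range(10):
        idx = np.where(groups == g)[0]
        if len(idx) < 3:                 # too few samples to judge outliers
            keep[idx] = True
            continue
        f = features[idx]
        z = np.abs(f - f.mean(axis=0)) / (f.std(axis=0) + 1e-9)
        keep[idx] = (z < z_max).all(axis=1)
    return keep                          # boolean mask of retained samples
```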

  14. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen the widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or updating of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA have 10 pixels in diameter (about 2 km in MOC-WA images) [2] or can go down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters

  15. Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning

    NASA Astrophysics Data System (ADS)

    Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen

    2011-03-01

    RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for decrypting the mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens, since the number of image data sets can often be in the hundreds of thousands. Reliable automated tools are thus required to analyse fluorescence microscopy image data sets that usually contain two or more reaction channels. The image analysis tool presented here is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issue of the image processing task is an automatic cell segmentation which has to be robust and accurate for all the different phenotypes, followed by phenotype classification. The cell segmentation is done in two steps: segmenting the cell nuclei first and then using a classifier-enhanced region growing on the basis of the cell nuclei to segment the cells. The classification of the cells is realized by a support vector machine which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.

  16. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
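
    A minimal master-slave sketch in Python: a process pool plays the role of the processor farm, evaluating fitness in parallel while the main process applies selection, recombination, and mutation. The toy objective, population size, and rates are illustrative assumptions.

```python
import random
from multiprocessing import Pool

def fitness(genome):                     # slaves evaluate this in parallel
    return -sum((g - 0.5) ** 2 for g in genome)   # toy objective

def evolve(pop_size=40, length=16, generations=50, workers=4):
    pop = [[random.random() for _ in range(length)] for _ in range(pop_size)]
    with Pool(workers) as pool:          # the "processor farm"
        for _ in range(generations):
            scores = pool.map(fitness, pop)        # farmed-out evaluations
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]       # master: selection
            pop = []
            while len(pop) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)  # master: recombination
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:          # master: mutation
                    child[random.randrange(length)] = random.random()
                pop.append(child)
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```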

  17. Effects of automation and task load on task switching during human supervision of multiple semi-autonomous robots in a dynamic environment.

    PubMed

    Squire, P N; Parasuraman, R

    2010-08-01

    The present study assessed the impact of task load and level of automation (LOA) on task switching in participants supervising a team of four or eight semi-autonomous robots in a simulated 'capture the flag' game. Participants were faster when repeating the same task than when they chose to switch between different task actions. They also took longer to switch between different tasks when supervising the robots at a high compared to a low LOA. Task load, as manipulated by the number of robots to be supervised, did not influence switch costs. The results suggest that the design of future unmanned vehicle (UV) systems should take into account not simply how many UVs an operator can supervise, but also the impact of LOA and task operations on task switching during supervision of multiple UVs. The findings of this study are relevant for the ergonomics practice of UV systems. This research extends the cognitive theory of task switching to inform the design of UV systems, and the results show that switching between UVs is an important factor to consider.

  18. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
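
    The in/out decision logic with a hysteresis constraint can be sketched as follows, assuming a per-sample boolean cloud flag has already been derived from the cloud-physics and radar data; the hold length is an illustrative parameter, not the paper's setting.

```python
def cloud_edges(in_cloud, hold=5):
    """Convert a noisy per-sample in/out-of-cloud flag sequence into edge
    indices. A state change only counts as an edge if the new state
    persists for `hold` consecutive samples (the hysteresis constraint),
    so transient puffs and data dropouts are ignored."""
    edges, state, i = [], in_cloud[0], 1
    while i < len(in_cloud):
        if in_cloud[i] != state:
            run = in_cloud[i:i + hold]
            if len(run) == hold and all(s != state for s in run):
                edges.append(i)          # genuine boundary crossing
                state = in_cloud[i]
                i += hold
                continue
        i += 1
    return edges

# A brief dropout inside cloud is not reported as an edge:
print(cloud_edges([1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0], hold=4))  # -> [6]
```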

  19. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  20. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  1. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.
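
    A minimal OpenCV sketch of the enhance-then-threshold idea, assuming a color fundus image: contrast enhancement on the green channel (where vessels are most visible) followed by an automatically selected threshold. The kernel size and the choice of Otsu as the automatic threshold are assumptions, not the authors' exact settings.

```python
import cv2

def segment_vessels(fundus_bgr):
    """Enhance-then-threshold sketch for retinal vessel segmentation."""
    green = fundus_bgr[:, :, 1]                  # vessels contrast best here
    enhanced = cv2.equalizeHist(green)           # histogram equalization
    blurred = cv2.medianBlur(enhanced, 5)        # suppress speckle noise
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask                                  # dark vessels as foreground

# img = cv2.imread("drive_image.tif")           # hypothetical DRIVE image
# mask = segment_vessels(img)
```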

  2. Automated maneuver planning using a fuzzy logic algorithm

    NASA Technical Reports Server (NTRS)

    Conway, D.; Sperling, R.; Folta, D.; Richon, K.; Defazio, R.

    1994-01-01

    Spacecraft orbital control requires intensive interaction between the analyst and the system used to model the spacecraft trajectory. For orbits with tight mission constraints and a large number of maneuvers, this interaction is difficult or expensive to accomplish in a timely manner. Some automation of maneuver planning can reduce these difficulties for maneuver-intensive missions. One approach to this automation is to use fuzzy logic in the control mechanism. Such a prototype system, currently under development, is discussed. The Tropical Rainfall Measurement Mission (TRMM) is one of several missions that could benefit from automated maneuver planning. TRMM is scheduled for launch in August 1997. The spacecraft is to be maintained in a 350-km circular orbit throughout the 3-year lifetime of the mission, with only very small variations in this orbit allowed. Since solar maximum will occur as early as 1999, solar activity during the TRMM mission will be increasing, with the result that orbital maneuvers will be performed as often as every other day. The results of automated maneuver planning for the TRMM mission will be presented to demonstrate the prototype of the fuzzy logic tool.

  3. Progress on the development of automated data analysis algorithms and software for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.

    2014-02-01

    Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.

  4. Development of an algorithm for automated enhancement of digital prototypes in machine engineering

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Dzhioev, G. A.

    2017-02-01

    The paper addresses the problem of processing digital prototypes in machine engineering using modern approaches to computer vision, methods of taxonomy (a branch of decision theory), and the automation of manual retouching techniques. In the course of studying the problem, different taxonomic methods were considered, among which the reference method was chosen as the most appropriate for the automated search for defective areas of the prototype. As a result, an algorithm for the automated enhancement of digital prototypes was developed using modern information technologies.

  5. Non-Algorithmic Issues in Automated Computational Mechanics

    DTIC Science & Technology

    1991-04-30

    (No abstract was indexed for this record; the retrieved text consists of report table-of-contents fragments covering discretization of the model, selection of computational methods and strategies, automated strategy selection and performance monitoring, and knowledge bases for a coupled PHLEX-NEXPERT environment.)

  6. Predicting pupylation sites in prokaryotic proteins using semi-supervised self-training support vector machine algorithm.

    PubMed

    Ju, Zhe; Gu, Hong

    2016-08-15

    As one important post-translational modification of prokaryotic proteins, pupylation plays a key role in regulating various biological processes. The accurate identification of pupylation sites is crucial for understanding the underlying mechanisms of pupylation. Although several computational methods have been developed for the identification of pupylation sites, their prediction accuracy is still unsatisfactory. Here, a novel bioinformatics tool named IMP-PUP is proposed to improve the prediction of pupylation sites. IMP-PUP is constructed on the composition of k-spaced amino acid pairs and trained with a modified semi-supervised self-training support vector machine (SVM) algorithm. The proposed algorithm iteratively trains a series of support vector machine classifiers on both annotated and non-annotated pupylated proteins. Computational results show that IMP-PUP achieves areas under the receiver operating characteristic curve of 0.91, 0.73, and 0.75 on our training set, Tung's testing set, and our testing set, respectively, which are better than those of the different-error-costs SVM algorithm and the original self-training SVM algorithm. Independent tests also show that IMP-PUP significantly outperforms three other existing pupylation site predictors: GPS-PUP, iPUP, and pbPUP. Therefore, IMP-PUP can be a useful tool for the accurate prediction of pupylation sites. A MATLAB software package for IMP-PUP is available at https://juzhe1120.github.io/.
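
    A generic self-training SVM loop in the spirit of the approach (not the IMP-PUP code): fit on labelled data, pseudo-label the most confident unlabelled samples, and refit. The confidence threshold and round count are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
    """Iteratively grow the labelled set with confident pseudo-labels."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = SVC(probability=True).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= conf        # high-confidence samples
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
        clf = SVC(probability=True).fit(X, y)        # refit with pseudo-labels
    return clf
```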

  7. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for the automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly-connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as its reliance on few adjustable parameters, linear run time with respect to the size of the protein, and reduced complexity compared to other graph-theory based algorithms, make it an attractive tool for structural biologists.
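
    The two-way normalized cut at the heart of the method can be sketched via the standard generalized eigenproblem formulation; the contact-matrix encoding and recursive application mentioned in the closing comment are assumptions about usage, not details from the abstract.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut(W):
    """Two-way normalized cut of a symmetric weighted adjacency matrix W
    (all node degrees assumed positive, so D is positive definite):
    threshold the eigenvector of the second-smallest generalized
    eigenvalue of (D - W) x = lambda D x at zero."""
    d = W.sum(axis=1)
    D = np.diag(d)
    _, vecs = eigh(D - W, D)      # generalized symmetric eigenproblem
    return vecs[:, 1] >= 0        # boolean bipartition of the nodes

# For domain assignment, W could encode inter-residue contact strengths and
# the bipartition would be applied recursively (hypothetical usage sketch).
```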

  8. An automated algorithm for the generation of dynamically reconstructed trajectories

    NASA Astrophysics Data System (ADS)

    Komalapriya, C.; Romano, M. C.; Thiel, M.; Marwan, N.; Kurths, J.; Kiss, I. Z.; Hudson, J. L.

    2010-03-01

    The lack of long enough data sets is a major problem in the study of many real world systems. As it has been recently shown [C. Komalapriya, M. Thiel, M. C. Romano, N. Marwan, U. Schwarz, and J. Kurths, Phys. Rev. E 78, 066217 (2008)], this problem can be overcome in the case of ergodic systems if an ensemble of short trajectories is available, from which dynamically reconstructed trajectories can be generated. However, this method has some disadvantages which hinder its applicability, such as the need for estimation of optimal parameters. Here, we propose a substantially improved algorithm that overcomes the problems encountered by the former one, allowing its automatic application. Furthermore, we show that the new algorithm not only reproduces the short term but also the long term dynamics of the system under study, in contrast to the former algorithm. To exemplify the potential of the new algorithm, we apply it to experimental data from electrochemical oscillators and also to analyze the well-known problem of transient chaotic trajectories.

  9. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  10. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.

  11. An algorithm for automated identification of fault zone trapped waves

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.

    2015-08-01

    We develop an algorithm for automatic identification of fault zone trapped waves in data recorded by seismic fault zone arrays. Automatic S picks are used to identify time windows in the seismograms for subsequent search for trapped waves. The algorithm calculates five features in each seismogram recorded by each station: predominant period, 1 s duration energy (representative of trapped waves), relative peak strength, arrival delay and 6 s duration energy (representative of the entire seismogram). These features are used collectively to identify stations in the array with seismograms that are statistical outliers. Applying the algorithm to large data sets allows for distinguishing genuine trapped waves from occasional localized site amplification in seismograms of other stations. The method is verified on a test data set recorded across the rupture zone of the 1992 Landers earthquake, for which trapped waves were previously identified manually, and is then applied to a larger data set with several thousand events recorded across the San Jacinto fault zone. The developed technique provides an important tool for systematic objective processing of large seismic waveform data sets recorded near fault zones.

  12. Evaluation of algorithms for automated phase correction of NMR spectra.

    PubMed

    de Brouwer, Hans

    2009-12-01

    In our attempt to fully automate the data acquisition and processing of NMR analysis of dissolved synthetic polymers, phase correction was found to be the most challenging aspect. Several approaches in literature were evaluated but none of these was found to be capable of phasing NMR spectra with sufficient robustness and high enough accuracy to fully eliminate intervention by a human operator. Step by step, aspects from the process of manual/visual phase correction were translated into mathematical concepts and evaluated. This included area minimization, peak height maximization, negative peak minimization and baseline correction. It was found that not one single approach would lead to acceptable results but that a combination of aspects was required, in line again with the process of manual phase correction. The combination of baseline correction, area minimization and negative area penalization was found to give the desired results. The robustness was found to be 100% which means that the correct zeroth order and first order phasing parameters are returned independent of the position of the starting point of the search in this parameter space. When applied to high signal-to-noise proton spectra, the accuracy was such that the returned phasing parameters were within a distance of 0.1-0.4 degrees in the two dimensional parameter space which resulted in an average error of 0.1% in calculated properties such as copolymer composition and end groups.
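
    The combined objective described above translates naturally into a small optimization; the following sketch phases a complex spectrum with zeroth- and first-order terms and penalizes negative area after a crude (median) baseline correction. The baseline choice, penalty weight, and optimizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def apply_phase(spec, phi0, phi1):
    """Zeroth- and first-order phase correction of a complex spectrum."""
    n = np.arange(len(spec)) / len(spec)
    return spec * np.exp(1j * (phi0 + phi1 * n))

def badness(params, spec, w_neg=100.0):
    """Objective combining baseline correction, area minimization,
    and negative-area penalization."""
    re = apply_phase(spec, *params).real
    re = re - np.median(re)                      # crude baseline correction
    return re.sum() + w_neg * np.abs(re[re < 0]).sum()

def auto_phase(spec):
    res = minimize(badness, x0=[0.0, 0.0], args=(spec,), method="Nelder-Mead")
    return res.x                                 # (phi0, phi1) in radians
```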

  13. Automated mineral identification algorithm using optical properties of crystals

    NASA Astrophysics Data System (ADS)

    Aligholi, Saeed; Khajavi, Reza; Razmara, Morteza

    2015-12-01

    A method has been developed to automatically characterize the type of mineral phases by means of digital image analysis using the optical properties of crystals. The method relies on microscope automation, digital image acquisition, image processing and analysis. Two hundred series of digital images were taken from 45 standard thin sections using a digital camera mounted on a conventional microscope and then transmitted to a computer. The CIELab color space is selected for the processing, in order to effectively employ its well-defined color difference metric for introducing an appropriate color-based feature. Seven basic optical properties of minerals (A. color; B. pleochroism; C. interference color; D. birefringence; E. opacity; F. isotropy; G. extinction angle) are redefined. The Local Binary Pattern (LBP) operator and texture modeling are integrated in the Mineral Identification (MI) scheme to identify homogeneous regions in microscopic images of minerals. The accuracy of mineral identification using the method was 99%, 98%, 96% and 95% for the biotite, hornblende, quartz and calcite minerals, respectively. The method is applicable to other minerals and phases for which the individual optical properties of crystals do not provide enough discrimination between the relevant phases. On the basis of this research, it can be concluded that if the CIELab color space and the local binary pattern (LBP) are applied, it is possible to recognize the mineral samples with an accuracy of more than 98%.

  14. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arriving aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  15. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms.

    PubMed

    Premaladha, J; Ravichandran, K S

    2016-04-01

    Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin diseases. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Earlier detection of melanoma prevents death, and clinicians can treat patients to increase the chances of survival. Only a few machine learning algorithms have been developed to detect melanoma using its features. This paper proposes a Computer Aided Diagnosis (CAD) system equipped with efficient algorithms to classify and predict melanoma. Enhancement of the images is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features derived and extracted from the segmented images are fed into the proposed classification techniques: Deep Learning based Neural Networks and a Hybrid AdaBoost-Support Vector Machine (SVM) algorithm. The proposed system is tested and validated with nearly 992 images (malignant and benign lesions) and provides a high classification accuracy of 93%. The proposed CAD system can assist dermatologists to confirm the decision of the diagnosis and to avoid excisional biopsies.

  16. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe (1999). First, SIFT feature points are detected independently in two images (the reference and the sensed image). The detected features are invariant to image rotations, translations and scaling, and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed, and the isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
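
    The matching stage maps closely onto OpenCV primitives; a sketch follows, with hypothetical file names and OpenCV in place of the paper's Matlab implementation. The Delaunay isomorphism check is a separate quality-control step not shown here.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(sen, None)

# Nearest/second-nearest distance ratio test (the "distance multiplied by
# a threshold" rule described in the abstract); 0.7 is an illustrative ratio
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)
# H maps sensed-image coordinates into the reference frame
```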

  17. A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification

    DTIC Science & Technology

    2016-07-01

    ...computation of a different pagerank for every node, and [70] involves solving a very large matrix system. We now present a simple, efficient and accurate... [20], the authors describe an algorithm solving linear systems with boundary conditions using heat kernel pagerank. The method in [21] is another local clustering algorithm, which uses a novel way of computing the pagerank very efficiently. An interesting application of heat kernel pagerank is...

  18. Image-derived input function derived from a supervised clustering algorithm: methodology and validation in a clinical protocol using [11C](R)-rolipram.

    PubMed

    Lyoo, Chul Hyoung; Zanotti-Fregonara, Paolo; Zoghbi, Sami S; Liow, Jeih-San; Xu, Rong; Pike, Victor W; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2014-01-01

    Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [(11)C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [(11)C](R)-rolipram PET scan. For each subject, metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for carotid and surrounding tissue; 2) parametric image of weights for blood were created using SVCA; 3) mask images to the individual PET image were inversely normalized; 4) carotid and surrounding tissue time activity curves (TACs) were obtained from weighted and unweighted averages of each voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan-distribution volume (V T/f P) values obtained by cluster-IDIF were similar to reference results obtained using arterial data, as well as those obtained using manual-IDIF; 39 of 51 subjects had a V T/f P error of <5%, and only one had error >10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed widespread decrease of about 20% [(11)C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to full arterial input function for estimating Logan-V T/f P in [(11)C

  19. Image-Derived Input Function Derived from a Supervised Clustering Algorithm: Methodology and Validation in a Clinical Protocol Using [11C](R)-Rolipram

    PubMed Central

    Zoghbi, Sami S.; Liow, Jeih-San; Xu, Rong; Pike, Victor W.; Zarate, Carlos A.; Fujita, Masahiro; Innis, Robert B.

    2014-01-01

    Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [11C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [11C](R)-rolipram PET scan. For each subject, metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for carotid and surrounding tissue; 2) parametric image of weights for blood were created using SVCA; 3) mask images to the individual PET image were inversely normalized; 4) carotid and surrounding tissue time activity curves (TACs) were obtained from weighted and unweighted averages of each voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan-distribution volume (VT/fP) values obtained by cluster-IDIF were similar to reference results obtained using arterial data, as well as those obtained using manual-IDIF; 39 of 51 subjects had a VT/fP error of <5%, and only one had error >10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed widespread decrease of about 20% [11C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to full arterial input function for estimating Logan-VT/fP in [11C](R)-rolipram PET clinical

  20. Evaluation of supervised machine-learning algorithms to distinguish between inflammatory bowel disease and alimentary lymphoma in cats.

    PubMed

    Awaysheh, Abdullah; Wilcke, Jeffrey; Elvinger, François; Rees, Loren; Fan, Weiguo; Zimmerman, Kurt L

    2016-11-01

    Inflammatory bowel disease (IBD) and alimentary lymphoma (ALA) are common gastrointestinal diseases in cats. The very similar clinical signs and histopathologic features of these diseases make the distinction between them diagnostically challenging. We tested the use of supervised machine-learning algorithms to differentiate between the 2 diseases using data generated from noninvasive diagnostic tests. Three prediction models were developed using 3 machine-learning algorithms: naive Bayes, decision trees, and artificial neural networks. The models were trained and tested on data from complete blood count (CBC) and serum chemistry (SC) results for the following 3 groups of client-owned cats: normal, inflammatory bowel disease (IBD), or alimentary lymphoma (ALA). Naive Bayes and artificial neural networks achieved higher classification accuracy (sensitivities of 70.8% and 69.2%, respectively) than the decision tree algorithm (63%, p < 0.0001). The areas under the receiver-operating characteristic curve for classifying cases into the 3 categories was 83% by naive Bayes, 79% by decision tree, and 82% by artificial neural networks. Prediction models using machine learning provided a method for distinguishing between ALA-IBD, ALA-normal, and IBD-normal. The naive Bayes and artificial neural networks classifiers used 10 and 4 of the CBC and SC variables, respectively, to outperform the C4.5 decision tree, which used 5 CBC and SC variables in classifying cats into the 3 classes. These models can provide another noninvasive diagnostic tool to assist clinicians with differentiating between IBD and ALA, and between diseased and nondiseased cats.
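
    A minimal sketch of how two of the three model families might be compared on tabular bloodwork features, using synthetic placeholder data in place of the clinical CBC/SC records; feature count and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))     # placeholder CBC/serum-chemistry values
y = rng.integers(0, 3, size=300)   # synthetic labels: 0=normal, 1=IBD, 2=ALA

for name, clf in [("naive Bayes", GaussianNB()),
                  ("decision tree", DecisionTreeClassifier(max_depth=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```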

  1. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    SciTech Connect

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  2. Automated Target Planning for FUSE Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, R. James; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2007-01-01

    The SOVA algorithm was originally developed under the Resilient Systems and Operations Project of the Engineering for Complex Systems Program from NASA's Aerospace Technology Enterprise as a conceptual framework to support real-time autonomous system mission and contingency management. The algorithm and its software implementation were formulated for generic application to autonomous flight vehicle systems, and its efficacy was demonstrated by simulation within the problem domain of Unmanned Aerial Vehicle autonomous flight management. The approach itself is based upon the precept that autonomous decision making for a very complex system can be made tractable by distillation of the system state to a manageable set of strategic objectives (e.g., maintain power margin, maintain mission timeline, et cetera) which, if attended to, will result in a favorable outcome. From any given starting point, the attainability of the end-states resulting from a set of candidate decisions is assessed by propagating a system model forward in time while qualitatively mapping simulated states into margins on strategic objectives using fuzzy inference systems. The expected return value of each candidate decision is evaluated as the product of the assigned value of the end-state and the assessed attainability of the end-state. The candidate decision yielding the highest expected return value is selected for implementation; thus, the approach provides a software framework for intelligent autonomous risk management. The name adopted for the technique incorporates its essential elements: Strategic Objective Valuation and Attainability (SOVA). Maximum value of the approach is realized for systems where human intervention is unavailable in the timeframe within which critical control decisions must be made. The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite, launched in 1999, has been collecting science data for eight years [1]. At its beginning of life, FUSE had six gyros in two

  3. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    PubMed Central

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74%, compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% versus 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and that could be used to screen for PH and encourage earlier specialist referral. PMID:27609672
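
    A sketch of the MFCC-plus-Gaussian-mixture pipeline using common open-source tools; the file lists, sampling rate, and mixture size are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=2000, n_mfcc=13):
    """Frame-level MFCCs from a heart-sound file (parameters illustrative)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # frames x coeffs

def train_models(ph_files, normal_files, n_components=8):
    """Fit one Gaussian-mixture model per class, speaker-recognition style."""
    gmm_ph = GaussianMixture(n_components, random_state=0).fit(
        np.vstack([mfcc_frames(f) for f in ph_files]))
    gmm_nl = GaussianMixture(n_components, random_state=0).fit(
        np.vstack([mfcc_frames(f) for f in normal_files]))
    return gmm_ph, gmm_nl

def classify(path, gmm_ph, gmm_nl):
    x = mfcc_frames(path)              # higher average log-likelihood wins
    return "PH" if gmm_ph.score(x) > gmm_nl.score(x) else "normal"
```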

  4. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    NASA Astrophysics Data System (ADS)

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-09-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% versus 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.

  5. A New Avenue for Classification and Prediction of Olive Cultivars Using Supervised and Unsupervised Algorithms

    PubMed Central

    Beiki, Amir H.; Saboor, Saba; Ebrahimi, Mansour

    2012-01-01

    Various methods have been used to identify cultivars of olive trees; herein we used different bioinformatics algorithms to propose new tools to classify 10 olive cultivars based on RAPD and ISSR genetic marker datasets generated from PCR reactions. Five RAPD markers (OPA0a21, OPD16a, OP01a1, OPD16a1 and OPA0a8) and five ISSR markers (UBC841a4, UBC868a7, UBC841a14, U12BC807a and UBC810a13) were selected as the most important markers by all attribute weighting models. K-Medoids unsupervised clustering run on the SVM dataset was fully able to cluster each olive cultivar into the right class. All 176 trees induced by decision tree models were meaningful, and the UBC841a4 attribute clearly distinguished between foreign and domestic olive cultivars with 100% accuracy. Predictive machine learning algorithms (SVM and Naïve Bayes) were also able to predict the right class of olive cultivars with 100% accuracy. For the first time, our results showed that data mining techniques can be effectively used to distinguish between plant cultivars, and the machine-learning-based systems proposed in this study can predict new olive cultivars with the best possible accuracy. PMID:22957050

  6. Application of supervised machine learning algorithms for the classification of regulatory RNA riboswitches.

    PubMed

    Singh, Swadha; Singh, Raghvendra

    2017-03-01

    Riboswitches, small structured RNA elements, were discovered about a decade ago. It has been the subject of intense interest to identify riboswitches, understand their mechanisms of action, and use them in genetic engineering. The accumulation of genome and transcriptome sequence data and comparative genomics provides unprecedented opportunities to identify riboswitches in the genome. In the present study, we have evaluated the following six machine learning algorithms for their efficiency in classifying riboswitches: J48, BayesNet, Naïve Bayes, Multilayer Perceptron, sequential minimal optimization, and the hidden Markov model (HMM). To determine the most effective classifier, the algorithms were compared on the statistical measures of specificity, sensitivity, accuracy, F-measure, and receiver operating characteristic (ROC) plot analysis. The Multilayer Perceptron classifier achieved the best performance, with the highest specificity, sensitivity, F-score and accuracy, and with the largest area under the ROC curve, whereas the HMM was the poorest performer. At present, the available tools for the prediction and classification of riboswitches are based on the covariance model, the support vector machine, and the HMM. The present study identifies the Multilayer Perceptron as a better classifier for genome-wide riboswitch searches.

  7. Automated anatomical labeling algorithm of bronchial branches based on multi-slice CT images

    NASA Astrophysics Data System (ADS)

    Kawai, J.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Multi-slice CT provides high-contrast, thin-slice images, but it also greatly increases the number of images that physicians must read, adding to their diagnostic workload. Algorithms that automatically analyze the internal organs of the lung are therefore needed: detailed analysis of these organs aids the early detection of nodules, and analysis of the bronchus in particular provides information useful for detecting airway disease and for classifying the pulmonary veins and arteries. In this paper, we describe a method for the automated anatomical labeling of bronchial branches based on multi-slice CT images.

  8. Automated anatomical labeling algorithm of bronchial branches based on multi-slice CT images

    NASA Astrophysics Data System (ADS)

    Kawai, J.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Kusumoto, M.; Kakinuma, R.; Moriyama, N.

    2007-03-01

    Multi-slice CT provides high-contrast, thin-slice images, but it also greatly increases the number of images that physicians must read, adding to their diagnostic workload. Algorithms that automatically analyze the internal organs of the lung are therefore needed: detailed analysis of these organs aids the early detection of nodules, and analysis of the bronchus in particular provides information useful for detecting airway disease and for classifying the pulmonary veins and arteries. In this paper, we describe a method for the automated anatomical labeling of bronchial branches based on multi-slice CT images.

  9. Clinical evaluation of the vector algorithm for neonatal hearing screening using automated auditory brainstem response.

    PubMed

    Keohane, Bernie M; Mason, Steve M; Baguley, David M

    2004-02-01

    A novel auditory brainstem response (ABR) detection and scoring algorithm, entitled the Vector algorithm, is described. An independent clinical evaluation of the algorithm using 464 tests (120 non-stimulated and 344 stimulated tests) on 60 infants, with a mean age of approximately 6.5 weeks, estimated test sensitivity at greater than 0.99 and test specificity at 0.87 for a single test. Specificity was estimated to be greater than 0.95 for a two-stage screen. Test times were of the order of 1.5 minutes per ear for detection of an ABR and 4.5 minutes per ear in the absence of a clear response. The Vector algorithm is commercially available for both automated screening and threshold estimation in hearing screening devices.

  10. The GOES-R ABI Wild Fire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Prins, E. M.; Brunner, J. C.

    2011-12-01

    The global Wild Fire Automated Biomass Burning Algorithm (WF_ABBA) at the Cooperative Institute for Meteorological Satellite Studies (CIMSS) provides fire detection and characterization using data from a global constellation of geostationary satellites, currently including GOES, MTSAT, and Meteosat. CIMSS continues to enhance the legacy of the WF_ABBA by adapting the algorithm to utilize the advanced spatial, spectral, and temporal capabilities of GOES-R ABI. A wide range of simulated ABI data cases have been generated and processed with the GOES-R fire detection and characterization algorithm. Simulated cases included MODIS derived projections as well as model derived simulations that span a variety of satellite zenith angles and ecosystems. The GOES-R ABI fire product development focuses on active fire detection and sub-pixel characterization, including fire radiative power (FRP) and instantaneous fire size and temperature. With the algorithm delivered to the system contractor, the focus has moved to developing innovative new validation techniques.

  11. Improved automated monitoring and new analysis algorithm for circadian phototaxis rhythms in Chlamydomonas

    PubMed Central

    Gaskill, Christa; Forbes-Stovall, Jennifer; Kessler, Bruce; Young, Mike; Rinehart, Claire A.; Jacobshagen, Sigrid

    2010-01-01

    Automated monitoring of circadian rhythms is an efficient way of gaining insight into oscillation parameters like period and phase for the underlying pacemaker of the circadian clock. Measurement of the circadian rhythm of phototaxis (swimming towards light) exhibited by the green alga Chlamydomonas reinhardtii has been automated by directing a narrow and dim light beam through a culture at regular intervals and determining the decrease in light transmittance due to the accumulation of cells in the beam. In this study, the monitoring process was optimized by constructing a new computer-controlled measuring machine that limits the test beam to wavelengths reported to be specific for phototaxis and by choosing an algal strain, which does not need background illumination between test light cycles for proper expression of the rhythm. As a result, period and phase of the rhythm are now unaffected by the time a culture is placed into the machine. Analysis of the rhythm data was also optimized through a new algorithm, whose robustness was demonstrated using virtual rhythms with various noises. The algorithm differs in particular from other reported algorithms by maximizing the fit of the data to a sinusoidal curve that dampens exponentially. The algorithm was also used to confirm the reproducibility of rhythm monitoring by the machine. Machine and algorithm can now be used for a multitude of circadian clock studies that require unambiguous period and phase determinations such as light pulse experiments to identify the photoreceptor(s) that reset the circadian clock in C. reinhardtii. PMID:20116270
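
    The damped-sinusoid fit at the heart of the new analysis algorithm can be illustrated with a short least-squares fit. This is a minimal sketch, assuming scipy and using synthetic data in place of real transmittance traces; the model form (a sinusoid that dampens exponentially) follows the description above, while all numerical values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def damped_sine(t, amp, tau, period, phase, offset):
        """Sinusoid whose amplitude decays exponentially with time constant tau."""
        return amp * np.exp(-t / tau) * np.cos(2 * np.pi * t / period + phase) + offset

    t = np.arange(0, 120, 0.5)                                  # hours (synthetic trace)
    rng = np.random.default_rng(0)
    y = damped_sine(t, 1.0, 60.0, 24.0, 0.3, 5.0) + 0.1 * rng.standard_normal(t.size)

    p0 = [1.0, 50.0, 24.0, 0.0, y.mean()]                       # rough initial guess
    popt, _ = curve_fit(damped_sine, t, y, p0=p0)
    print(f"period = {popt[2]:.2f} h, phase = {popt[3]:.2f} rad")
    ```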

  12. Reliability of old and new ventricular fibrillation detection algorithms for automated external defibrillators

    PubMed Central

    Amann, Anton; Tratnig, Robert; Unterkofler, Karl

    2005-01-01

    Background A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation by means of appropriate detection algorithms. In the scientific literature there exists a wide variety of methods and ideas for handling this task. These algorithms should have a high detection quality, be easily implementable, and work in real time in an AED. Testing of these algorithms should be done by using a large amount of annotated data under equal conditions. Methods For our investigation we simulated a continuous analysis by selecting the data in steps of one second without any preselection. We used the complete MIT-BIH arrhythmia database, the CU database, and the files 7001–8210 of the AHA database. All algorithms were tested under equal conditions. Results For 5 well-known standard and 5 new ventricular fibrillation detection algorithms we calculated the sensitivity, specificity, and the area under their receiver operating characteristic. In addition, two QRS detection algorithms were included. These results are based on approximately 330 000 decisions (per algorithm). Conclusion Our values for sensitivity and specificity differ from earlier investigations since we used no preselection. The best algorithm is a new one, presented here for the first time. PMID:16253134
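
    The per-second evaluation protocol reduces to standard binary-classification bookkeeping. A minimal sketch, assuming scikit-learn, with placeholder annotations and detector outputs standing in for the databases above:

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Placeholder per-second annotations (1 = VF) and detector scores.
    labels = np.array([0, 0, 1, 1, 0, 1])
    scores = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9])

    pred = (scores >= 0.5).astype(int)                  # one decision per second
    tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("ROC area:", roc_auc_score(labels, scores))
    ```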

  13. Taboo search algorithm for item assignment in synchronized zone automated order picking system

    NASA Astrophysics Data System (ADS)

    Wu, Yingying; Wu, Yaohua

    2014-07-01

    The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; therefore, the item assignment method affects picking efficiency. Previous studies focus only on balancing the number of item types across zones, not on the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective can be simplified from minimizing idle time to minimizing the idle factor. Based on this, the model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxation of the parallel machine scheduling problem, which has been proven NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones with a 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measuring the idle time in synchronized zone automated order picking systems; the approach can improve picking efficiency significantly and can serve as a theoretical basis when optimizing such systems.
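
    The core loop of such a method can be sketched as follows. This is a toy illustration, not the paper's algorithm: the "idle factor" is simplified here to the summed item load per zone, and a 1-exchange neighbourhood stands in for the paper's 2-exchange.

    ```python
    def max_idle_factor(assign, loads, n_zones):
        """Largest per-zone load; a simplified stand-in for the paper's idle factor."""
        totals = [0.0] * n_zones
        for item, zone in enumerate(assign):
            totals[zone] += loads[item]
        return max(totals)

    def taboo_search(loads, n_zones, iters=300, tenure=10):
        assign = [i % n_zones for i in range(len(loads))]   # initial round-robin spread
        best = list(assign)
        taboo = {}                                          # (item, zone) -> expiry iteration
        for it in range(iters):
            cands = []
            for item in range(len(loads)):                  # neighbourhood: move one item
                for zone in range(n_zones):
                    if zone != assign[item] and taboo.get((item, zone), -1) < it:
                        trial = list(assign)
                        trial[item] = zone
                        cands.append((max_idle_factor(trial, loads, n_zones), item, zone))
            if not cands:
                break
            cost, item, zone = min(cands)                   # best admissible neighbour
            taboo[(item, assign[item])] = it + tenure       # forbid moving straight back
            assign[item] = zone
            if cost < max_idle_factor(best, loads, n_zones):
                best = list(assign)
        return best

    loads = [3, 1, 4, 1, 5, 9, 2, 6]                        # placeholder item loads
    print(taboo_search(loads, n_zones=3))
    ```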

  14. Supervised machine learning algorithms to diagnose stress for vehicle drivers based on physiological sensor signals.

    PubMed

    Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin

    2015-01-01

    Machine learning algorithms play an important role in computer science research. Recent advances in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work comparing the performance of different machine learning methods to help select the appropriate method for a specific characteristic of the data set. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks, and support vector machines, to diagnose the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. In contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.

  15. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  16. Dynamics of G-band bright points derived using two fully automated algorithms

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Utz, D.; Rybák, J.; Hanslmeier, A.

    Small-scale magnetic field concentrations (˜ 1 kG) in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Study of the dynamics of G-band bright points (GBPs) can help in solving several questions, including the coronal heating problem. Here a set of 142 G-band speckled images obtained using the Dutch Open Telescope (DOT) on October 19, 2005 is used to compare identification of the GBPs by two different fully automated identification algorithms: an algorithm developed by Utz et al. (2009a, 2009b) and an algorithm developed according to the papers of Berger et al. (1995, 1998). Temporal and spatial tracking of the GBPs identified by both algorithms was performed, resulting in distributions of lifetimes, sizes and velocities of the GBPs. The obtained results show that both algorithms give very similar values for the lifetime and velocity estimates of the GBPs, but they differ significantly in the estimation of GBP sizes. This difference is caused by the fact that we applied no additional exclusion criteria to the GBPs identified by the algorithm based on the work of Berger et al. (1995, 1998). Therefore we conclude that in future studies of GBP dynamics we will prefer to use Utz's algorithm to perform identification and tracking of the GBPs in G-band images.

  17. CorPITA: An Automated Algorithm for the Identification and Analysis of Coronal "EIT Waves"

    NASA Astrophysics Data System (ADS)

    Long, D. M.; Bloomfield, D. S.; Gallagher, P. T.; Pérez-Suárez, D.

    2014-09-01

    The continuous stream of data available from the Atmospheric Imaging Assembly (AIA) telescopes onboard the Solar Dynamics Observatory (SDO) spacecraft has allowed a deeper understanding of the Sun. However, the sheer volume of data has necessitated the development of automated techniques to identify and analyse various phenomena. In this article, we describe the Coronal Pulse Identification and Tracking Algorithm (CorPITA) for the identification and analysis of coronal "EIT waves". CorPITA uses an intensity-profile technique to identify the propagating pulse, tracking it throughout its evolution before returning estimates of its kinematics. The algorithm is applied here to a data set from February 2011, allowing its capabilities to be examined and critiqued. This algorithm forms part of the SDO Feature Finding Team initiative and will be implemented as part of the Heliophysics Event Knowledgebase (HEK). This is the first fully automated algorithm to identify and track the propagating "EIT wave" rather than any associated phenomenon and will allow a deeper understanding of this controversial phenomenon.

  18. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.
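
    Two of the simpler checks mentioned (exposure and focus) admit compact no-reference implementations. The sketch below is illustrative rather than the paper's algorithm; OpenCV is assumed and the thresholds are placeholders that would need tuning per camera fleet.

    ```python
    import cv2

    def quality_flags(path, blur_thresh=100.0, dark_thresh=40, bright_thresh=215):
        """Flag a captured frame for likely focus and exposure problems."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        flags = {}
        # Focus/blur: variance of the Laplacian collapses for defocused images.
        flags["blurry"] = cv2.Laplacian(img, cv2.CV_64F).var() < blur_thresh
        # Exposure: a mean far from mid-range suggests under-/over-exposure.
        mean = float(img.mean())
        flags["underexposed"] = mean < dark_thresh
        flags["overexposed"] = mean > bright_thresh
        return flags
    ```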

  19. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  20. Automated algorithm for actinic cheilitis diagnosis by wide-field fluorescence imaging.

    PubMed

    Cosci, Alessandro; Takahama, Ademar; Correr, Wagner Rafael; Azevedo, Rebeca Souza; Fontes, Karla Bianca Fernandes da Costa; Kurachi, Cristina

    2016-10-01

    Actinic cheilitis (AC) is a disease caused by prolonged and cumulative sun exposure that mostly affects the lower lip and can progress to a lip squamous cell carcinoma. Routine diagnosis relies on clinician experience and training. We investigated the diagnostic efficacy of wide-field fluorescence imaging coupled to an automated algorithm for AC recognition. Fluorescence images were acquired from 57 patients with confirmed AC and 46 normal volunteers. Three different algorithms were employed: two based on the emission characteristics of local heterogeneity, entropy and intensity range, and one based on the number of objects after K-means clustering. A classification model was obtained using a fivefold cross-validation algorithm. Sensitivity and specificity rates were 86% and 89.1%, respectively.
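
    The three feature families named above (local entropy, local intensity range, and object counts after K-means clustering) can be prototyped as follows. This is a hedged sketch assuming scikit-image, scikit-learn, and scipy; the window sizes and cluster count are illustrative, not the paper's settings.

    ```python
    import numpy as np
    import scipy.ndimage as ndi
    from skimage.filters.rank import entropy
    from skimage.measure import label
    from skimage.morphology import disk
    from sklearn.cluster import KMeans

    def fluorescence_features(img_uint8, k=3):
        """Return (mean local entropy, mean local intensity range, object count)."""
        ent = entropy(img_uint8, disk(5)).mean()                   # local heterogeneity
        rng = (ndi.maximum_filter(img_uint8, size=5).astype(int)
               - ndi.minimum_filter(img_uint8, size=5)).mean()     # local intensity range
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
        lab = km.fit_predict(img_uint8.reshape(-1, 1)).reshape(img_uint8.shape)
        brightest = int(np.argmax(km.cluster_centers_.ravel()))
        n_objects = int(label(lab == brightest).max())             # objects in brightest cluster
        return ent, rng, n_objects
    ```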

  1. An automated bladder volume measurement algorithm by pixel classification using random forests.

    PubMed

    Annangi, Pavan; Frigstad, Sigmund; Subin, S B; Torp, Anders; Ramasubramaniam, Sundararajan; Varna, Srinivas

    2016-08-01

    Residual bladder volume is a very important marker for patients with urinary retention problems. Handheld ultrasound devices are extremely useful for monitoring patients with these conditions, whether at the bedside by nurses or in an outpatient setting by general physicians. However, to increase the use of these devices by nontraditional users, automated tools that aid in the scanning and measurement process are of great help. In this paper, we develop a robust segmentation algorithm to automatically measure bladder volume by segmenting bladder contours from sagittal and transverse ultrasound views using a combination of machine learning and active contour algorithms. The algorithm is tested on 50 unseen images and 23 transverse and longitudinal image pairs, and the performance is reported.
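
    The paper does not spell out the volume formula, but a common convention for two-view bladder measurement, assumed here, approximates the organ as an ellipsoid whose three diameters come from the sagittal and transverse segmentations:

    ```python
    import math

    def bladder_volume_ml(depth_cm, width_cm, height_cm):
        """Ellipsoid approximation: depth/height from the sagittal view,
        width from the transverse view; 1 cm^3 == 1 ml."""
        return math.pi / 6.0 * depth_cm * width_cm * height_cm

    print(f"{bladder_volume_ml(8.0, 7.0, 6.5):.0f} ml")   # illustrative diameters
    ```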

  2. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  3. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    PubMed

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  4. Algorithms for the Automated Power Systems Management. [for planetary exploration missions

    NASA Technical Reports Server (NTRS)

    Moser, R. L.; Imamura, M. S.

    1979-01-01

    The system breadboard for the demonstration of Automated Power Systems Management (APSM) functions has been designed, fabricated, and is in the final stages of verification and testing. APSM functions fall into categories of fault detection and correction, commanding system and subsystem test and diagnoses, relay position monitoring, data acquisition and processing, and power management. All these functions are accomplished through software residing in 8-bit microprocessors dedicated to each group of Viking Orbiter power breadboard subassemblies and a 16-bit microprocessor serving as the central processor for the power system. This paper describes key monitoring, diagnostic, and control algorithms used in the APSM breadboard.

  5. Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images

    PubMed Central

    2013-01-01

    Background The faithful determination of the concentration and viability of yeast cells is important for biological research as well as industry. To this end, it is important to develop an automated cell counting algorithm that can provide not only fast but also accurate and precise measurement of yeast cells. Results With the proposed method, we measured the precision of yeast cell measurements by using 0%, 25%, 50%, 75% and 100% viability samples. As a result, the actual viability measured with the proposed yeast cell counting algorithm is significantly correlated with the theoretical viability (R2 = 0.9991). Furthermore, we evaluated the performance of our algorithm on various computing platforms. The results showed that the proposed algorithm is feasible to use with low-end computing platforms without loss of performance. Conclusions Our yeast cell counting algorithm can rapidly provide the total number and the viability of yeast cells with exceptional accuracy and precision. Therefore, we believe that our method can be beneficial for a wide variety of academic fields and industries, such as biotechnology, pharmaceuticals and alcohol production. PMID:24215650

  6. Applying hybrid algorithms for text matching to automated biomedical vocabulary mapping.

    PubMed

    Nachimuthu, Senthil K; Lau, Lee Min

    2005-01-01

    Several biomedical vocabularies are often used by clinical applications due to their different domain(s) of coverage, intended use, etc. Mapping them to a reference terminology is essential for inter-system interoperability. Manual vocabulary mapping is labor-intensive and allows room for inconsistencies. It requires manual searching for synonyms, abbreviation expansions, variations, etc., placing additional burden on the mappers. Furthermore, local vocabularies may use non-standard words and abbreviations, posing additional problems. However, much of this process can be automated to provide decision support, allowing mappers to focus on steps that absolutely need their expertise. We developed hybrid algorithms comprising rules, permutations, sequence alignment and cost algorithms that utilize the UMLS SPECIALIST Lexicon, a custom knowledgebase and a search engine to automatically find probable matches, allowing mappers to select the best match from this list. We discuss the techniques, results from assisting to map a local codeset, and scope for generalizability.
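
    One layer of such a hybrid matcher can be sketched as rule-based normalization followed by sequence-alignment scoring. This is an illustration under stated assumptions, not the authors' system: the abbreviation table is a placeholder, and difflib's similarity ratio stands in for their cost algorithms and knowledgebase.

    ```python
    import difflib

    ABBREV = {"hr": "heart rate", "bp": "blood pressure"}   # placeholder rule table

    def normalize(term):
        """Rule layer: lowercase and expand known abbreviations."""
        return " ".join(ABBREV.get(w, w) for w in term.lower().split())

    def best_matches(local_term, reference_terms, n=3):
        """Rank reference terms by alignment similarity to the local term."""
        q = normalize(local_term)
        return sorted(reference_terms,
                      key=lambda t: difflib.SequenceMatcher(None, q, normalize(t)).ratio(),
                      reverse=True)[:n]

    print(best_matches("HR sitting", ["heart rate, sitting", "blood pressure, sitting"]))
    ```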

  7. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.

  8. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
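
    Automated regularization of a linear inversion of this kind can be illustrated with Tikhonov filtering plus an automatic choice of the regularization parameter. The sketch below uses generalized cross-validation (GCV), one common objective criterion; the paper's own automated method may differ.

    ```python
    import numpy as np

    def tikhonov_gcv(A, b, lambdas):
        """Tikhonov solutions over a lambda grid; pick lambda by GCV."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        best = (np.inf, None, None)
        for lam in lambdas:
            f = s**2 / (s**2 + lam**2)                   # filter factors
            x = Vt.T @ (f * beta / s)                    # regularized solution
            resid = np.linalg.norm(A @ x - b) ** 2
            gcv = resid / (len(b) - f.sum()) ** 2        # GCV score
            if gcv < best[0]:
                best = (gcv, lam, x)
        return best[1], best[2]                          # chosen lambda, solution
    ```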

  9. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

    Machine learning (SVM and JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the three first enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by the human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated for CGRs for both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated in n-fold cross-validation on the training set followed by a challenge to detect correct and wrong mappings within an external test set of reactions, never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mapping performed by any automated algorithm.

  10. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate and automated quantification of CAC scores from non-contrast CT is feasible.
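
    The per-slice Agatston convention that such scoring builds on is compact enough to sketch; the paper's contribution is the automated localization around it. Thresholds and weights below follow the usual definition for 3-mm axial slices, and scipy is assumed.

    ```python
    import numpy as np
    from scipy import ndimage

    def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
        """Agatston contribution of one axial slice of HU values."""
        labels, n = ndimage.label(hu >= 130)             # candidate calcifications
        score = 0.0
        for i in range(1, n + 1):
            lesion = labels == i
            area = lesion.sum() * pixel_area_mm2
            if area < min_area_mm2:                      # ignore sub-millimetre specks
                continue
            peak = hu[lesion].max()
            weight = 1 + min(int(peak // 100) - 1, 3)    # 130-199 -> 1 ... >=400 -> 4
            score += area * weight
        return score
    ```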

  11. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    PubMed Central

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; Strauss, David G.

    2016-01-01

    Background Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however, it is not specific, as QTc-prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeak (J-Tpeakc) and Tpeak-Tend intervals can identify QTc-prolonging drugs with inward current block and is being proposed as part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). Methods In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Results Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profiles of the baseline- and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. Conclusions We have developed an automated algorithm for assessment of J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. The algorithm is being released as open-source software. Trial Registration NCT02308748 and NCT01873950 PMID:28036330
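
    The measured quantities themselves are simple once the fiducial points are known; the hard part, automated fiducial detection, is what the paper contributes and is omitted here. A minimal sketch of the vector-magnitude lead and the two sub-intervals, with an assumed sampling rate:

    ```python
    import numpy as np

    def vector_magnitude(x, y, z):
        """Vector-magnitude lead from orthogonal ECG leads."""
        return np.sqrt(x**2 + y**2 + z**2)

    def subintervals_ms(j_idx, tpeak_idx, tend_idx, fs=1000):
        """J-Tpeak and Tpeak-Tend in ms, given fiducial sample indices at rate fs."""
        return ((tpeak_idx - j_idx) * 1000.0 / fs,
                (tend_idx - tpeak_idx) * 1000.0 / fs)
    ```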

  12. Automated Reconstruction Algorithm for Identification of 3D Architectures of Cribriform Ductal Carcinoma In Situ

    PubMed Central

    Norton, Kerri-Ann; Namazi, Sameera; Barnard, Nicola; Fujibayashi, Mariko; Bhanot, Gyan; Ganesan, Shridar; Iyatomi, Hitoshi; Ogawa, Koichi; Shinbrot, Troy

    2012-01-01

    Ductal carcinoma in situ (DCIS) is a pre-invasive carcinoma of the breast that exhibits several distinct morphologies, but the link between morphology and patient outcome is not clear. We hypothesize that different mechanisms of growth may still result in similar 2D morphologies, which may look different in 3D. To elucidate the connection between growth and 3D morphology, we reconstruct the 3D architecture of cribriform DCIS from resected patient material. We produce a fully automated algorithm that aligns, segments, and reconstructs 3D architectures from microscopy images of 2D serial sections from human specimens. The alignment algorithm is based on normalized cross correlation; the segmentation algorithm uses histogram equalization, Otsu's thresholding, and morphology techniques to segment the duct and cribra. The reconstruction method combines these images in 3D. We show that two distinct 3D architectures are indeed found in samples whose 2D histological sections are similarly identified as cribriform DCIS. These differences in architecture support the hypothesis that luminal spaces may form due to different mechanisms, either isolated cell death or merging fronds, leading to the different architectures. We find that out of 15 samples, 6 were found to have ‘bubble-like’ cribra, 6 were found to have ‘tube-like’ cribra and 3 were ‘unknown.’ We propose that the 3D architectures found, ‘bubbles’ and ‘tubes’, account for some of the heterogeneity of the disease and may be prognostic indicators of different patient outcomes. PMID:22970156
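
    The alignment and segmentation building blocks named above map onto standard scikit-image operations. A hedged sketch: phase correlation serves here as the cross-correlation-based registration step, and translation-only alignment is a simplification of the full procedure.

    ```python
    from scipy import ndimage
    from skimage.exposure import equalize_hist
    from skimage.filters import threshold_otsu
    from skimage.registration import phase_cross_correlation

    def align_to(reference, section):
        """Estimate the translation onto `reference` and apply it."""
        shift, _, _ = phase_cross_correlation(reference, section)
        return ndimage.shift(section, shift)

    def segment(section):
        """Histogram-equalize, then binarize with Otsu's threshold."""
        eq = equalize_hist(section)
        return eq > threshold_otsu(eq)
    ```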

  13. Automated Analysis of 1p/19q Status by FISH in Oligodendroglial Tumors: Rationale and Proposal of an Algorithm

    PubMed Central

    Duval, Céline; de Tayrac, Marie; Michaud, Karine; Cabillic, Florian; Paquet, Claudie; Gould, Peter Vincent; Saikali, Stéphan

    2015-01-01

    Objective To propose a new algorithm facilitating automated analysis of 1p and 19q status by FISH technique in oligodendroglial tumors with software packages available in the majority of institutions using this technique. Methods We documented all green/red (G/R) probe signal combinations in a retrospective series of 53 oligodendroglial tumors according to literature guidelines (Algorithm 1) and selected only the most significant combinations for a new algorithm (Algorithm 2). This second algorithm was then validated on a prospective internal series of 45 oligodendroglial tumors and on an external series of 36 gliomas. Results Algorithm 2 utilizes 24 G/R combinations which represent less than 40% of combinations observed with Algorithm 1. The new algorithm excludes some common G/R combinations (1/1, 3/2) and redefines the place of others (defining 1/2 as compatible with normal and 3/3, 4/4 and 5/5 as compatible with imbalanced chromosomal status). The new algorithm uses the combination + ratio method of signal probe analysis to give the best concordance between manual and automated analysis on samples of 100 tumor cells (91% concordance for 1p and 89% concordance for 19q) and full concordance on samples of 200 tumor cells. This highlights the value of automated analysis as a means to identify cases in which a larger number of tumor cells should be studied by manual analysis. Validation of this algorithm on a second series from another institution showed a satisfactory concordance (89%, κ = 0.8). Conclusion Our algorithm can be easily implemented on all existing FISH analysis software platforms and should facilitate multicentric evaluation and standardization of 1p/19q assessment in gliomas with reduction of the professional and technical time required. PMID:26135922

  14. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct the 3-D model of bone-cartilage transitional structures. The entire system includes the following five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on cross-correlation of the positions of anatomical feature points in two sequential sections. A method combining automatic segmentation and image thresholding was applied to capture the regions and structures of interest. An SEM micrograph and a 3-D model reconstructed directly in a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrograph and the digital microscope's 3-D model.

  15. Bayesian supervised dimensionality reduction.

    PubMed

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  16. National Automated Surveillance of Hospital-Acquired Bacteremia in Denmark Using a Computer Algorithm.

    PubMed

    Gubbels, Sophie; Nielsen, Jens; Voldstedlund, Marianne; Kristensen, Brian; Schønheyder, Henrik C; Ellermann-Eriksen, Svend; Engberg, Jørgen H; Møller, Jens K; Østergaard, Christian; Mølbak, Kåre

    2017-03-09

    BACKGROUND In 2015, Denmark launched an automated surveillance system for hospital-acquired infections, the Hospital-Acquired Infections Database (HAIBA). OBJECTIVE To describe the algorithm used in HAIBA, to determine its concordance with point prevalence surveys (PPSs), and to present trends for hospital-acquired bacteremia. SETTING Private and public hospitals in Denmark. METHODS A hospital-acquired bacteremia case was defined as at least 1 positive blood culture with at least 1 pathogen (bacterium or fungus) taken between 48 hours after admission and 48 hours after discharge, using the Danish Microbiology Database and the Danish National Patient Registry. PPSs performed in 2012 and 2013 were used for comparison. RESULTS National trends showed an increase in HA bacteremia cases between 2010 and 2014. Incidence was higher for men than women (9.6 vs 5.4 per 10,000 risk days) and was highest for those aged 61-80 years (9.5 per 10,000 risk days). The median daily prevalence was 3.1% (range, 2.1%-4.7%). Regional incidence varied from 6.1 to 8.1 per 10,000 risk days. The microorganisms identified were typical for HA bacteremia. Comparison of HAIBA with PPS showed a sensitivity of 36% and a specificity of 99%. HAIBA was less sensitive for patients in hematology departments and intensive care units. Excluding these departments improved the sensitivity of HAIBA to 44%. CONCLUSIONS Although the estimated sensitivity of HAIBA compared with PPS is low, a PPS is not a gold standard. Given the many advantages of automated surveillance, HAIBA allows monitoring of HA bacteremia across the healthcare system, supports prioritizing preventive measures, and holds promise for evaluating interventions. Infect Control Hosp Epidemiol 2017;1-8.
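
    The case definition quoted above is essentially a date-window test. A minimal sketch of that rule, with illustrative field names:

    ```python
    from datetime import datetime, timedelta

    def is_hospital_acquired(admitted: datetime, discharged: datetime,
                             culture_drawn: datetime, pathogens: list) -> bool:
        """Case rule: at least 1 pathogen, culture drawn between 48 h after
        admission and 48 h after discharge."""
        window_start = admitted + timedelta(hours=48)
        window_end = discharged + timedelta(hours=48)
        return bool(pathogens) and window_start <= culture_drawn <= window_end
    ```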

  17. An Automated Reference Frame Selection (ARFS) Algorithm for Cone Imaging with Adaptive Optics Scanning Light Ophthalmoscopy

    PubMed Central

    Salmon, Alexander E.; Cooper, Robert F.; Langlo, Christopher S.; Baghaie, Ahmadreza; Dubra, Alfredo; Carroll, Joseph

    2017-01-01

    Purpose To develop an automated reference frame selection (ARFS) algorithm to replace the subjective approach of manually selecting reference frames for processing adaptive optics scanning light ophthalmoscope (AOSLO) videos of cone photoreceptors. Methods Relative distortion was measured within individual frames before conducting image-based motion tracking and sorting of frames into distinct spatial clusters. AOSLO images from nine healthy subjects were processed using ARFS and human-derived reference frames, then aligned to undistorted AO-flood images by nonlinear registration and the registration transformations were compared. The frequency at which humans selected reference frames that were rejected by ARFS was calculated in 35 datasets from healthy subjects, and subjects with achromatopsia, albinism, or retinitis pigmentosa. The level of distortion in this set of human-derived reference frames was assessed. Results The average transformation vector magnitude required for registration of AOSLO images to AO-flood images was significantly reduced from 3.33 ± 1.61 pixels when using manual reference frame selection to 2.75 ± 1.60 pixels (mean ± SD) when using ARFS (P = 0.0016). Between 5.16% and 39.22% of human-derived frames were rejected by ARFS. Only 2.71% to 7.73% of human-derived frames were ranked in the top 5% of least distorted frames. Conclusion ARFS outperforms expert observers in selecting minimally distorted reference frames in AOSLO image sequences. The low success rate in human frame choice illustrates the difficulty in subjectively assessing image distortion. Translational Relevance Manual reference frame selection represented a significant barrier to a fully automated image-processing pipeline (including montaging, cone identification, and metric extraction). The approach presented here will aid in the clinical translation of AOSLO imaging. PMID:28392976

  18. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  19. Automated SNP genotype clustering algorithm to improve data completeness in high-throughput SNP genotyping datasets from custom arrays.

    PubMed

    Smith, Edward M; Littrell, Jack; Olivier, Michael

    2007-12-01

    High-throughput SNP genotyping platforms use automated genotype calling algorithms to assign genotypes. While these algorithms work efficiently for individual platforms, they are not compatible with other platforms, and have individual biases that result in missed genotype calls. Here we present data on the use of a second complementary SNP genotype clustering algorithm. The algorithm was originally designed for individual fluorescent SNP genotyping assays, and has been optimized to permit the clustering of large datasets generated from custom-designed Affymetrix SNP panels. In an analysis of data from a 3K array genotyped on 1,560 samples, the additional analysis increased the overall number of genotypes by over 45,000, significantly improving the completeness of the experimental data. This analysis suggests that the use of multiple genotype calling algorithms may be advisable in high-throughput SNP genotyping experiments. The software is written in Perl and is available from the corresponding author.

  20. An efficient automated algorithm to detect ocular surface temperature on sequence of thermograms using snake and target tracing function.

    PubMed

    Tan, Jen Hong; Ng, E Y K; Acharya U, Rajendra

    2011-10-01

    Functional infrared (IR) imaging is widely adopted in the medical field nowadays, with emphasis on breast cancer and ocular abnormalities. In this article, an algorithm is presented to accurately locate the eye and cornea in ocular thermographic sequences recorded using functional infrared imaging. The localization is achieved by a snake algorithm coupled with a newly proposed target tracing function. The target tracing function enables automated localization, requiring no manual assistance before the algorithm runs. A genetic algorithm is used to search for the global minimum of the function to produce the desired localization. Across all the cases we studied, the region encircled by the algorithm covers, on average, 92% of the true ocular region, while non-ocular regions account for less than 5% of the encircled area.

  1. Supervised Autonomy

    ERIC Educational Resources Information Center

    Sexton, Patrick; Levy, Linda S.; Willeford, K. Sean; Barnum, Mary G.; Gardner, Greg; Guyer, M. Susan; Fincher, A. Louise

    2009-01-01

    Objective: The primary objective of this paper is to present the evolution, purpose, and definition of direct supervision in the athletic training clinical education. The secondary objective is to briefly present the factors that may negatively affect the quality of direct supervision to allow remediation and provide higher quality clinical…

  2. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    SciTech Connect

    Zhao, Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang, Yao

    2012-05-15

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by applying manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002) and the V95% for the tumor bed with 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using an SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine-treated patients, without any relevant degradation of dose homogeneity and coverage in general.
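
    A minimal Python sketch of the core geometric idea: fit a weighted separating plane between target and OAR point clouds, then read a tangential beam direction off the plane normal. The paper's cost function and jaw/collimator derivations are more involved; the weighting scheme and names below are illustrative.

        import numpy as np
        from sklearn.svm import LinearSVC

        def tangential_beam_from_plane(target_pts, oar_pts, target_weight=4.0):
            """Fit a plane separating target volumes from OARs and derive a
            tangential beam direction from its normal (schematic sketch)."""
            X = np.vstack([target_pts, oar_pts])
            y = np.r_[np.ones(len(target_pts)), -np.ones(len(oar_pts))]
            # Relative importance of covering the target vs. sparing the OARs
            weights = np.where(y > 0, target_weight, 1.0)
            svm = LinearSVC(C=1.0).fit(X, y, sample_weight=weights)
            n = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # plane normal
            # Gantry angle: rotation of the plane normal in the axial (x-y) plane
            gantry_deg = np.degrees(np.arctan2(n[1], n[0]))
            return n, gantry_deg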

  3. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
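
    The regression step described above reduces to a weighted least-squares problem. A minimal sketch, assuming the bispectrum-component matrix has already been computed (e.g., by LAMMPS); only the linear fit is shown, not the SNAP machinery.

        import numpy as np

        def fit_snap_coefficients(B, targets, weights):
            """Weighted least-squares fit of linear SNAP coefficients.

            B       : (n_obs, n_bispectrum) bispectrum components (or their
                      derivatives, for force rows)
            targets : (n_obs,) QM energies/forces/stresses to reproduce
            weights : (n_obs,) per-observation weights
            """
            w = np.sqrt(weights)
            coeffs, *_ = np.linalg.lstsq(B * w[:, None], targets * w, rcond=None)
            return coeffs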

  4. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation

    PubMed Central

    Dave, J. K.; Halldorsdottir, V. G.; Eisenbrey, J. R.; Merton, D. A.; Liu, J. B.; Machado, P.; Zhao, H.; Park, S.; Dianis, S.; Chalek, C. L.; Thomenius, K. E.; Brown, D. B.; Forsberg, F.

    2013-01-01

    Incident acoustic output (IAO)-dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth, or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth-stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth-stage subharmonic signals and also studied the effect of pulse length on SHAPE. This approach may eliminate the need to acquire and analyze data offline at all IAO levels, as was done in previous studies, and thus pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The algorithm stepped the ultrasound scanner from 0 to 100% IAO. A logistic equation was fitted by minimizing the least-squared error between the fitted and measured subharmonic amplitudes as a function of IAO level, and the optimum IAO level was chosen as the inflection point of the fitted curve. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines and was compared with the performance of IAO levels below and above the optimum for 4, 8, and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6 to 40 %. The best correlation between changes in PV pressures and in subharmonic amplitudes (r = -0.76; p = 0
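
    A small sketch of the curve-fitting step, assuming amplitude-vs-IAO data are already in hand: fit a three-parameter logistic and take its inflection point as the operating level. Parameter names and starting values are illustrative, not the scanner-side implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, a, b, x0):
            """Three-parameter logistic growth curve."""
            return a / (1.0 + np.exp(-b * (x - x0)))

        def optimal_iao(iao_levels, subharmonic_db):
            """Fit subharmonic amplitude vs. IAO and return the inflection
            point of the fitted logistic as the optimum IAO level."""
            p0 = [subharmonic_db.max(), 0.1, np.median(iao_levels)]
            (a, b, x0), _ = curve_fit(logistic, iao_levels, subharmonic_db,
                                      p0=p0, maxfev=10000)
            return x0  # inflection point of a/(1+exp(-b(x-x0))) is at x0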

  5. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control System (RCS) jets, ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or an incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g., descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration-magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of
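
    The two candidate detectors lend themselves to a compact sketch. The following Python illustrates both triggers on IMU data; the thresholds and window length are placeholders, not flight-qualified values.

        import numpy as np

        def detect_touchdown(accel, dt, accel_thresh=50.0, dv_thresh=3.0,
                             window=0.2):
            """Flag touchdown via (1) an acceleration-magnitude spike and
            (2) accumulated velocity change over a sliding window.

            accel : (n, 3) body accelerations [m/s^2], sampled every dt seconds
            """
            mag = np.linalg.norm(accel, axis=1)
            spike_hit = mag > accel_thresh                       # method 1

            n_win = max(1, int(window / dt))
            dv = np.convolve(mag * dt, np.ones(n_win), mode="same")
            dv_hit = dv > dv_thresh                              # method 2

            return spike_hit, dv_hit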

  6. Modelling molecule-surface interactions--an automated quantum-classical approach using a genetic algorithm.

    PubMed

    Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A

    2011-06-14

    We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system, but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface.
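
    As a toy example of a GA-based fitting loop of this kind, the sketch below tunes a single Lennard-Jones pair against reference energies. The published scheme fits many parameters against full DFT adsorption-energy landscapes; the bounds, mutation scheme, and population settings here are assumptions.

        import numpy as np

        def lj_energy(r, eps, sigma):
            """Pairwise Lennard-Jones energy for distances r."""
            sr6 = (sigma / r) ** 6
            return 4.0 * eps * (sr6 ** 2 - sr6)

        def fit_lj_ga(r_samples, e_ref, pop=40, gens=200,
                      rng=np.random.default_rng(0)):
            """Toy GA fitting (eps, sigma) to reference energies."""
            # Random initial population within loose physical bounds
            P = np.column_stack([rng.uniform(0.01, 1.0, pop),   # eps
                                 rng.uniform(1.5, 4.0, pop)])   # sigma
            for _ in range(gens):
                err = [np.mean((lj_energy(r_samples, *p) - e_ref) ** 2)
                       for p in P]
                order = np.argsort(err)
                elite = P[order[: pop // 4]]                    # selection
                kids = elite[rng.integers(0, len(elite), pop - len(elite))]
                kids = kids * rng.normal(1.0, 0.05, kids.shape) # mutation
                P = np.vstack([elite, kids])
            return P[0]  # best (eps, sigma) from the final generation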

  7. Automated Transient Recovery Algorithm using Discrete Zernike Polynomials on Image-Subtracted Data

    NASA Astrophysics Data System (ADS)

    Ackley, Kendall; Eikenberry, Stephen S.; Klimenko, Sergey

    2016-01-01

    We present an unsupervised algorithm for the automated identification of astrophysical transients recovered through image subtraction techniques. We use a set of discrete Zernike polynomials to decompose and characterize residual energy discovered in the final subtracted image, identifying candidate sources that appear point-like in nature. This work is motivated by collaboration with advanced gravitational-wave (GW) interferometers, such as Advanced LIGO and Virgo, where multiwavelength electromagnetic (EM) emission is expected in parallel with gravitational radiation from compact binary mergers involving neutron stars (NS-NS) and stellar-mass black holes (NS-BH). Imaging an EM counterpart coincident with a GW trigger will help to constrain the multi-dimensional GW parameter space as well as aid in the resolution of long-standing astrophysical mysteries, such as the true nature of the progenitor relationship between short-duration GRBs and massive compact binary mergers. We are working to make our method an open-source package optimized for low-latency response for community use during the upcoming era of GW astronomy.

  8. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    USGS Publications Warehouse

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on Moderate Resolution Imaging Spectroradiometer (MODIS) remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision-tree rules to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment against the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys, with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
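
    A single illustrative decision-tree rule in the spirit of ACCA is sketched below; the thresholds are placeholders rather than the published rule set, and the pixel is assumed to lie inside a cropland extent mask.

        import numpy as np

        def classify_pixel(ndvi_series, peak_thresh=0.5, amp_thresh=0.2):
            """Label a MODIS NDVI time series (pixel already within a
            cropland mask) as cultivated or fallow."""
            peak = float(np.max(ndvi_series))
            amplitude = peak - float(np.min(ndvi_series))
            if peak >= peak_thresh and amplitude >= amp_thresh:
                return "cultivated"   # strong seasonal green-up and senescence
            return "fallow"           # persistently low or flat greenness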

  9. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed before implementing IM procedures in real-world operations.

  10. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  11. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.

  12. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    NASA Technical Reports Server (NTRS)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
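
    The distinction the abstract draws between GLIKE and CLASSIFY comes down to whether class priors enter the Gaussian maximum-likelihood rule. A schematic sketch (not the Landsat Mapping System code):

        import numpy as np
        from scipy.stats import multivariate_normal

        def ml_classify(x, class_means, class_covs, priors=None):
            """Gaussian maximum-likelihood classification of a pixel vector.

            With priors=None every class is equally likely (the GLIKE
            assumption); passing class proportions reproduces the
            prior-weighted rule credited to CLASSIFY.
            """
            k = len(class_means)
            priors = np.full(k, 1.0 / k) if priors is None else np.asarray(priors)
            scores = [np.log(priors[i]) +
                      multivariate_normal.logpdf(x, class_means[i], class_covs[i])
                      for i in range(k)]
            return int(np.argmax(scores))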

  13. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    PubMed Central

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that relies heavily on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at the trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no single descriptor enables separation of the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy, up to 99.7%, in detecting schizont-stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for late trophozoite or schizont stage, and late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection
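
    A compact sketch of the classifier comparison, assuming the 23 morphological descriptors have already been extracted into a feature matrix; scikit-learn models stand in for the paper's LDC, LR, and NNC.

        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def compare_classifiers(X, y, folds=10):
            """Cross-validated accuracy for the three classifier families
            named in the abstract, on a descriptor matrix X with labels y."""
            models = {
                "LDC": LinearDiscriminantAnalysis(),
                "LR": LogisticRegression(max_iter=1000),
                "NNC": KNeighborsClassifier(n_neighbors=5),
            }
            return {name: cross_val_score(m, X, y, cv=folds).mean()
                    for name, m in models.items()}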

  14. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  15. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    PubMed Central

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctors’ activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or
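
    A schematic sketch of a multi-algorithm text-classification ensemble of the kind the study describes; the actual work trained 8 algorithms on GMC-CQ comments, and the member models, features, and settings below are illustrative.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.ensemble import VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.svm import LinearSVC

        def build_feedback_classifier():
            """Majority-vote ensemble over three text classifiers, fed by
            tf-idf features extracted from the raw comments."""
            ensemble = VotingClassifier([
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", MultinomialNB()),
                ("svm", LinearSVC()),
            ], voting="hard")
            return make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                                 ensemble)

        # Usage: clf = build_feedback_classifier(); clf.fit(comments, labels)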

  16. Design and demonstration of automated data analysis algorithms for ultrasonic inspection of complex composite panels with bonds

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2016-02-01

    To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.

  17. Weakly supervised glasses removal

    NASA Astrophysics Data System (ADS)

    Wang, Zhicheng; Zhou, Yisu; Wen, Lijie

    2015-03-01

    Glasses removal is an important task in face recognition. In this paper, we provide a weakly supervised method to remove eyeglasses from an input face image automatically. We choose sparse coding as the face reconstruction method and optical flow to find the exact shape of the glasses, and we combine the two processes iteratively to remove glasses more accurately. The experimental results reveal that our method works much better than either algorithm alone and that it can remove various glasses to obtain natural-looking glassless facial images.

  18. Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.

    SciTech Connect

    Hart, Darren

    2004-07-01

    MatSeis's infrasound analysis tool, Infra Tool, uses frequency-slowness processing to deconstruct array data into three outputs per processing step: correlation, azimuth, and slowness. Until now, infrasound signal detection was accomplished manually by an experienced analyst trained to recognize patterns in the signal-processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value. Together, these two conditions describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file lists the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency-slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg), and trace velocity (km/s). As an example, a ground-truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event, the Watusi chemical explosion, occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array locations, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and
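
    The detection criterion, flat azimuth with correlation above background, can be sketched compactly. The version below uses a plain least-squares slope instead of the Hough Transform or Inverse Slope methods, and its thresholds are placeholders.

        import numpy as np

        def detect_infrasound(azimuth, correlation, n_steps=8,
                              max_slope=0.5, corr_floor=0.3):
            """Flag processing steps where the azimuth trace is flat while
            correlation stays above a background floor."""
            hits = np.zeros(len(azimuth), dtype=bool)
            t = np.arange(n_steps)
            for i in range(len(azimuth) - n_steps):
                win_az = azimuth[i:i + n_steps]
                slope = np.polyfit(t, win_az, 1)[0]       # deg per step
                if (abs(slope) < max_slope and
                        correlation[i:i + n_steps].min() > corr_floor):
                    hits[i:i + n_steps] = True
            return hits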

  19. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms

    PubMed Central

    Pierce, Larry A.; Byrd, Darrin W.; Elston, Brian F.; Karp, Joel S.; Sunderland, John J.; Kinahan, Paul E.

    2016-01-01

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for inter-user variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plugin to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows runtimes between 3 and 4 minutes, and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and max values on the order of 0.5% and 2% respectively over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC. PMID:26894356

  20. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-01

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows runtimes between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC. PACS number: 87.57.C.

  1. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce II, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  2. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by combining Landsat, MODIS, and secondary data

    USGS Publications Warehouse

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through a combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega-file data cubes (MFDCs) involving data from the Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on the MFDC of year 2005 (MFDC2005). The methods involved in producing the TCL included ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composite (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies, or within 20% quantity disagreement, involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until the ACCA-derived croplands (ACLs) accurately match the TCLs. Third, the ACCA derived cropland

  3. Quantitative mapping of hemodynamics in the lung, brain, and dorsal window chamber-grown tumors using a novel, automated algorithm

    PubMed Central

    Fontanella, Andrew N.; Schroeder, Thies; Hochman, Daryl W.; Chen, Raymond E.; Hanna, Gabi; Haglund, Michael M.; Secomb, Timothy W.; Palmer, Gregory M.; Dewhirst, Mark W.

    2013-01-01

    Hemodynamic properties of vascular beds are of great interest in a variety of clinical and laboratory settings. However, there presently exists no automated, accurate, technically simple method for generating blood velocity maps of complex microvessel networks. Here we present a novel algorithm that addresses this problem by applying pixel-by-pixel cross-correlation to video data. Temporal signals at every spatial coordinate are compared with signals at neighboring points, generating a series of correlation maps from which speed and direction are calculated. User assisted definition of vessel geometries is not required, and sequential data are analyzed automatically, without user bias. Velocity measurements are validated against the dual-slit method and against capillary flow with known velocities. The algorithm is tested in three different biological models. Along with simultaneously acquired hemoglobin saturation and vascular geometry information, the hemodynamic maps presented here demonstrate an accurate, quantitative method of analyzing dynamic vascular systems. PMID:23781901
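
    A minimal one-axis version of the pixel-by-pixel cross-correlation idea: correlate each pixel's temporal signal with its neighbor's and convert the best lag into a speed. The full method produces 2-D speed and direction maps; names and defaults here are assumptions.

        import numpy as np

        def velocity_map(video, max_lag=10, dt=1.0, dx=1.0):
            """Estimate flow speed along the x-axis of a (n_frames, ny, nx)
            video stack from lag-of-peak cross-correlation between each
            pixel and its right-hand neighbor."""
            sig = video - video.mean(axis=0)             # remove DC per pixel
            nf, ny, nx = sig.shape
            speed = np.full((ny, nx - 1), np.nan)
            lags = np.arange(-max_lag, max_lag + 1)
            for y in range(ny):
                for x in range(nx - 1):
                    a, b = sig[:, y, x], sig[:, y, x + 1]
                    xc = [np.dot(a[max(0, -l):nf - max(0, l)],
                                 b[max(0, l):nf - max(0, -l)]) for l in lags]
                    best = lags[int(np.argmax(xc))]
                    if best != 0:
                        speed[y, x] = dx / (best * dt)   # signed: flow direction
            return speed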

  4. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    SciTech Connect

    Reyhan, M; Yue, N

    2014-06-01

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 × 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings used for orientation. Calibration is applied to the mean pixel values from the ROIs, and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, a paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2 to 886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistically significant difference between the two techniques (p=0.98). Linear regression with a forced zero intercept demonstrated that Automatic = 0.997 × Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help
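
    The threshold-and-erode step generalizes readily; a generic sketch using SciPy is shown below, with dose calibration omitted and all parameter values illustrative.

        import numpy as np
        from scipy import ndimage

        def film_rois(image, dark_thresh=0.6, erode_iter=3, min_area=200):
            """Detect film pieces by thresholding and erosion, then return
            the mean pixel value of each detected ROI."""
            # Film is assumed darker than the scanner bed
            mask = image < dark_thresh * image.max()
            # Erosion trims film edges and orientation markings
            mask = ndimage.binary_erosion(mask, iterations=erode_iter)
            labels, n = ndimage.label(mask)
            rois = []
            for i in range(1, n + 1):
                region = labels == i
                if region.sum() >= min_area:        # ignore specks
                    rois.append(image[region].mean())
            return rois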

  5. Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms

    ERIC Educational Resources Information Center

    Booker, Queen Esther

    2009-01-01

    An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…

  6. Semi-automated algorithm for localization of dermal/epidermal junction in reflectance confocal microscopy images of human skin

    NASA Astrophysics Data System (ADS)

    Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.

    2011-03-01

    The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7μm for dark skin and around 7-14μm for fair skin.
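
    The tile-wise peak-selection step for dark skin can be sketched as follows, assuming some texture-feature extractor stands in for the paper's basal-cell descriptors.

        import numpy as np
        from scipy.signal import find_peaks

        def locate_dej_tile(stack_tile, basal_feature, feat_fn, smooth=5):
            """For one tile of an RCM z-stack, pick the depth whose peak in
            the smoothed mean-intensity profile best matches a basal-cell
            texture reference.

            stack_tile    : (n_depths, h, w) image tile
            basal_feature : reference texture-feature vector
            feat_fn       : callable image -> feature vector (stand-in for
                            the paper's texture descriptors)
            """
            profile = stack_tile.mean(axis=(1, 2))
            kernel = np.ones(smooth) / smooth
            profile = np.convolve(profile, kernel, mode="same")  # smooth in depth
            peaks, _ = find_peaks(profile)
            if len(peaks) == 0:
                return None
            # Choose the peak whose texture best matches the basal reference
            dists = [np.linalg.norm(feat_fn(stack_tile[p]) - basal_feature)
                     for p in peaks]
            return int(peaks[int(np.argmin(dists))])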

  7. Microcounselling Supervision: An Innovative Integrated Supervision Model

    ERIC Educational Resources Information Center

    Russell-Chapin, Lori A.; Ivey, Allen E.

    2004-01-01

    This article introduces a new integrated model of counselling supervision entitled the Microcounselling Supervision Model (MSM). This type of supervision is designed for supervisors and supervisees who favor eclecticism and work from multiple theoretical orientations. MSM successfully combines skills from various theories and supervision models by…

  8. Modeling pilot interaction with automated digital avionics systems: Guidance and control algorithms for contour and nap-of-the-Earth flight

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1990-01-01

    A collection of technical papers are presented that cover modeling pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers presented are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.

  9. Validation study of automated dermal/epidermal junction localization algorithm in reflectance confocal microscopy images of skin

    NASA Astrophysics Data System (ADS)

    Kurugol, Sila; Rajadhyaksha, Milind; Dy, Jennifer G.; Brooks, Dana H.

    2012-02-01

    Reflectance confocal microscopy (RCM) has seen increasing clinical application for noninvasive diagnosis of skin cancer. Identifying the location of the dermal-epidermal junction (DEJ) in the image stacks is key for effective clinical imaging. For example, one clinical imaging procedure acquires a dense stack of 0.5x0.5mm FOV images and then, after manual determination of DEJ depth, collects a 5x5mm mosaic at that depth for diagnosis. However, especially in lightly pigmented skin, RCM images have low contrast at the DEJ which makes repeatable, objective visual identification challenging. We have previously published proof of concept for an automated algorithm for DEJ detection in both highly- and lightly-pigmented skin types based on sequential feature segmentation and classification. In lightly-pigmented skin the change of skin texture with depth was detected by the algorithm and used to locate the DEJ. Here we report on further validation of our algorithm on a more extensive collection of 24 image stacks (15 fair skin, 9 dark skin). We compare algorithm performance against classification by three clinical experts. We also evaluate inter-expert consistency among the experts. The average correlation across experts was 0.81 for lightly pigmented skin, indicating the difficulty of the problem. The algorithm achieved epidermis/dermis misclassification rates smaller than 10% (based on 25x25 mm tiles) and average distance from the expert labeled boundaries of ~6.4 μm for fair skin and ~5.3 μm for dark skin, well within average cell size and less than 2x the instrument resolution in the optical axis.

  10. Image processing algorithm for automated monitoring of metal transfer in double-electrode GMAW

    NASA Astrophysics Data System (ADS)

    Wang, Zhen Zhou; Zhang, Yu Ming

    2007-07-01

    Controlled metal transfer in gas metal arc welding (GMAW) implies controllable weld quality. To understand, analyse and control the metal transfer process, the droplet must be monitored and tracked. To process metal transfer images in double-electrode GMAW (DE-GMAW), a novel modification of GMAW, a brightness-based algorithm is proposed to locate the droplet and compute the droplet size automatically. Although this algorithm can locate the droplet with adequate accuracy, its accuracy in computing droplet size needs improvement. To this end, the correlation among adjacent images arising from droplet development is exploited to improve the algorithm. Experimental results verified that the improved algorithm can automatically locate the droplets and compute droplet size with adequate accuracy.

  11. Robust algorithms for automated chemical shift calibration of 1D 1H NMR spectra of blood serum.

    PubMed

    Pearce, Jake T M; Athersuch, Toby J; Ebbels, Timothy M D; Lindon, John C; Nicholson, Jeremy K; Keun, Hector C

    2008-09-15

    In biofluid NMR spectroscopy, the frequency of each resonance is typically calibrated by adding a reference compound such as 3-(trimethylsilyl)propionic acid-d4 (TSP) to the sample. However, biofluids such as serum cannot be referenced to TSP because of resonance shifts caused by binding to macromolecules in solution. To overcome this limitation we have developed algorithms, based on analysis of derivative spectra, to locate and calibrate 1H NMR spectra to the alpha-glucose anomeric doublet. We successfully used these algorithms to calibrate 77 serum 1H NMR spectra and demonstrate the greater reproducibility of the calculated chemical-shift corrections (r = 0.97) than those generated by manual alignment (r = 0.8-0.88). Hence these algorithms provide robust and reproducible methods of calibrating 1H NMR spectra of serum, plasma, or any biofluid in which glucose is abundant. Precise automated calibration of complex biofluid NMR spectra is an important tool in large-scale metabonomic or metabolomic studies, where hundreds or even thousands of spectra may be analyzed at high resolution by pattern recognition.
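
    A minimal sketch of glucose-doublet calibration: locate the two anomeric doublet lines in a search window and shift the ppm axis so their midpoint lands on the reference shift. Plain peak picking stands in for the authors' derivative-spectrum analysis, and the reference values are typical literature numbers, not theirs.

        import numpy as np
        from scipy.signal import find_peaks

        def calibrate_to_glucose(ppm, intensity, doublet_ppm=5.223, search=0.15):
            """Return a ppm axis shifted so the alpha-glucose anomeric
            doublet sits at its reference chemical shift."""
            win = np.where((ppm > doublet_ppm - search) &
                           (ppm < doublet_ppm + search))[0]
            pk, _ = find_peaks(intensity[win])
            if len(pk) < 2:
                raise ValueError("anomeric doublet not found in search window")
            tallest = pk[np.argsort(intensity[win][pk])[-2:]]  # two doublet lines
            center = ppm[win[tallest]].mean()
            return ppm - (center - doublet_ppm)                # calibrated axis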

  12. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It tends to escape local minima and can therefore reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. Experimental results show that it gives optimal solutions, even though it is greedy in nature.

  13. Statistical Studies of Flux Transfer Events Using Unsupervised and Supervised Techniques

    NASA Astrophysics Data System (ADS)

    Driscoll, J.; Sipes, T. B.; Karimabadi, H.; Sibeck, D. G.; Korotova, G. I.

    2006-12-01

    We report preliminary results concerning the combined use of unsupervised and supervised techniques to classify Geotail FTEs. Currently, humans identify FTEs on the basis of clear, isolated bipolar signatures normal to the nominal magnetopause, magnetic field strength enhancements, and sometimes east/west deflections of the magnetic field in the plane of the magnetopause (BM). However, events with decreases or crater-like structures in the magnetic field strength, no east/west deflection, and asymmetric or continuous variations normal to the magnetopause have also been identified as FTEs, making statistical studies of FTEs problematic. Data mining techniques are particularly useful in developing automated search algorithms and generating large event lists for statistical studies. Data mining techniques can be divided into two types, supervised and unsupervised. In supervised algorithms, one trains the algorithm using examples from labeled data. In the case of FTEs, the user would provide labeled examples of both FTEs and non-FTEs. Since one has to start with a labeled data set, this may introduce a user bias into the selection process. To avoid this issue, it can be useful to employ unsupervised techniques, which are analogous to training without a teacher: the data are not labeled. There is also hybrid modeling, in which several models built using unsupervised and supervised techniques are connected into a single hybrid model.

  14. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department

    PubMed Central

    Deleger, Louise; Brodzinski, Holly; Zhai, Haijun; Li, Qi; Lingren, Todd; Kirkendall, Eric S; Alessandrini, Evaline; Solti, Imre

    2013-01-01

    Objective To evaluate a proposed natural language processing (NLP) and machine-learning based automated method to risk stratify abdominal pain patients by analyzing the content of the electronic health record (EHR). Methods We analyzed the EHRs of a random sample of 2100 pediatric emergency department (ED) patients with abdominal pain, including all with a final diagnosis of appendicitis. We developed an automated system to extract relevant elements from ED physician notes and lab values and to automatically assign a risk category for acute appendicitis (high, equivocal, or low), based on the Pediatric Appendicitis Score. We evaluated the performance of the system against a manually created gold standard (chart reviews by ED physicians) for recall, specificity, and precision. Results The system achieved an average F-measure of 0.867 (0.869 recall and 0.863 precision) for risk classification, which was comparable to physician experts. Recall/precision were 0.897/0.952 in the low-risk category, 0.855/0.886 in the high-risk category, and 0.854/0.766 in the equivocal-risk category. The information that the system required as input to achieve high F-measure was available within the first 4 h of the ED visit. Conclusions Automated appendicitis risk categorization based on EHR content, including information from clinical notes, shows comparable performance to physician chart reviewers as measured by their inter-annotator agreement and represents a promising new approach for computerized decision support to promote application of evidence-based medicine at the point of care. PMID:24130231

  15. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes

    2014-10-01

    Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension with each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10% and 57%, depending on the metrics used to score success, while improving portfolio balance.
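
    A generic form of such a utility function, rewarding total data volume while penalizing imbalance across regions, might look as follows; the campaign's actual function was elicited from the investigators, so the weights and functional form here are assumptions.

        import numpy as np

        def portfolio_utility(hours_by_region, balance_weight=0.5):
            """Rank-order candidate data portfolios for a multi-region
            campaign: reward total volume, penalize regional imbalance."""
            h = np.asarray(hours_by_region, dtype=float)
            total = h.sum()
            balance = 1.0 - h.std() / (h.mean() + 1e-9)   # 1 = perfectly even
            return ((1 - balance_weight) * total +
                    balance_weight * total * balance)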

  16. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework, that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits, we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that pose different computational challenges, including prioritized data structures in a genetic algorithm, distributed computational effort in multiple hill-climbing searches, and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
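
    Of the optimization protocols mentioned, simulated annealing is the easiest to sketch. Below is a generic single-parameter annealer of the kind the control unit could run against a sensor-derived objective; the objective function, step size, and cooling schedule are stand-in assumptions.

    ```python
    import math
    import random

    def simulated_annealing(objective, x0, step=0.1, t0=1.0,
                            cooling=0.995, n_iter=2000):
        """Minimize `objective` over one perfusion parameter. In the real
        system the objective would be evaluated from live sensor readings."""
        x, fx = x0, objective(x0)
        best_x, best_f = x, fx
        t = t0
        for _ in range(n_iter):
            cand = x + random.uniform(-step, step)
            fc = objective(cand)
            # Always accept improvements; accept worse moves with a
            # temperature-dependent probability (Metropolis criterion).
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x, fx
            t *= cooling
        return best_x, best_f

    # Stand-in objective: squared error between a simulated sensor
    # response and its target value.
    print(simulated_annealing(lambda v: (math.sin(3 * v) + 0.5 * v - 0.2) ** 2, 0.0))
    ```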

  17. A Recursive Multiscale Correlation-Averaging Algorithm for an Automated Distributed Road Condition Monitoring System

    SciTech Connect

    Ndoye, Mandoye; Barker, Alan M; Krogmeier, James; Bullock, Darcy

    2011-01-01

    A signal processing approach is proposed to jointly filter and fuse spatially indexed measurements captured from many vehicles. It is assumed that these measurements are influenced by both sensor noise and measurement indexing uncertainties. Measurements from low-cost vehicle-mounted sensors (e.g., accelerometers and Global Positioning System (GPS) receivers) are properly combined to produce higher quality road roughness data for cost-effective road surface condition monitoring. The proposed algorithms are recursively implemented and thus require only moderate computational power and memory space. These algorithms are important for future road management systems, which will use on-road vehicles as a distributed network of sensing probes gathering spatially indexed measurements for condition monitoring, in addition to other applications, such as environmental sensing and/or traffic monitoring. Our method and the related signal processing algorithms have been successfully tested using field data.
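
    The paper's multiscale correlation-averaging step is not reproduced here, but the following sketch conveys the flavor of a recursive, memory-light fusion of spatially indexed roughness measurements from many vehicle passes; modeling GPS indexing uncertainty as a Gaussian spread over neighboring bins is an assumption of this sketch.

    ```python
    import numpy as np

    def fuse_pass(mean, count, positions_m, roughness, bin_m=10.0, sigma_bins=1.0):
        """Recursively fold one vehicle pass into per-bin roughness means,
        spreading each sample over nearby bins to reflect GPS uncertainty."""
        n_bins = len(mean)
        for pos, r in zip(positions_m, roughness):
            center = pos / bin_m
            for b in range(max(0, int(center) - 3), min(n_bins, int(center) + 4)):
                w = np.exp(-0.5 * ((b - center) / sigma_bins) ** 2)
                count[b] += w
                mean[b] += w * (r - mean[b]) / count[b]  # incremental weighted mean
        return mean, count

    mean, count = np.zeros(100), np.zeros(100)  # 1 km of road in 10 m bins
    rng = np.random.default_rng(0)
    for _ in range(20):                         # twenty probe-vehicle passes
        pos = np.sort(rng.uniform(0, 1000, 200))
        rough = np.sin(pos / 80.0) + rng.normal(0, 0.3, pos.size)
        mean, count = fuse_pass(mean, count, pos, rough)
    ```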

  18. A robust linear regression based algorithm for automated evaluation of peptide identifications from shotgun proteomics by use of reversed-phase liquid chromatography retention time

    PubMed Central

    Xu, Hua; Yang, Lanhao; Freitas, Michael A

    2008-01-01

    Background Rejection of false positive peptide matches in database searches of shotgun proteomic experimental data is highly desirable. Several methods have been developed to use the peptide retention time as a means to refine and improve peptide identifications from database search algorithms. This report describes the implementation of an automated approach to reduce false positives and validate peptide matches. Results A robust linear regression based algorithm was developed to automate the evaluation of peptide identifications obtained from shotgun proteomic experiments. The algorithm scores peptides based on their predicted and observed reversed-phase liquid chromatography retention times. The robust algorithm does not require internal or external peptide standards to train or calibrate the linear regression model used for peptide retention time prediction. The algorithm is generic and can be incorporated into any database search program to perform automated evaluation of the candidate peptide matches based on their retention times. It provides a statistical score for each peptide match based on its retention time. Conclusion Analysis of peptide matches where the retention time score was included resulted in a significant reduction of false positive matches with little effect on the number of true positives. Overall, higher sensitivities and specificities were achieved for database searches carried out with MassMatrix, Mascot and X!Tandem after implementation of the retention time based score algorithm. PMID:18713471
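
    A minimal sketch of the core idea: robust line fitting of observed versus predicted retention times, so that outlying (likely false-positive) matches carry little weight, via iteratively reweighted least squares with Huber weights. The weight function and all numbers below are illustrative assumptions, not the paper's exact estimator.

    ```python
    import numpy as np

    def huber_irls(x, y, k=1.345, n_iter=20):
        """Fit y ~ a + b*x by iteratively reweighted least squares with
        Huber weights; returns intercept, slope, standardized residuals."""
        w = np.ones_like(y)
        X = np.column_stack([np.ones_like(x), x])
        for _ in range(n_iter):
            W = np.diag(w)
            a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            r = y - (a + b * x)
            s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
            u = np.abs(r / s)
            w = np.where(u <= k, 1.0, k / u)           # Huber weights
        return a, b, r / s

    # Illustrative retention times (minutes); the last match is an outlier.
    predicted = np.array([10.2, 15.1, 22.8, 31.0, 44.5, 18.0])
    observed = np.array([11.0, 15.8, 23.5, 32.1, 45.0, 39.0])
    a, b, z = huber_irls(predicted, observed)
    flagged = np.abs(z) > 2.5  # candidate false-positive matches
    ```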

  19. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Expert systems that require access to data bases, complex simulations and real time instrumentation have both symbolic and algorithmic computing needs. These needs could both be met using a general computing workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed to demonstrate the ability of an expert system to autonomously control the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. Integration options are explored and several possible solutions are presented.

  20. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.

    PubMed

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to studying the expression of thousands of genes simultaneously in a single experiment. The average fluorescent intensity of each spot can be calculated in a microarray experiment, and these intensity values closely reflect the expression level of the corresponding gene. However, determining the appropriate position of every spot in microarray images is a major challenge, one that underpins the accurate classification of normal and abnormal (cancer) cells. In this paper, first a preprocessing step is performed to eliminate the noise and artifacts present in microarray images using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is located using mathematical morphology operations. Finally, the position of each spot is exactly determined through applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy images, respectively.
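
    For orientation, here is plain fuzzy c-means on pixel features in NumPy; the paper's SFCM adds a spatial neighborhood term and a Gaussian kernel distance that this sketch deliberately omits.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Core FCM iteration on feature vectors X of shape (n, d)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U_new = inv / inv.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return U_new, centers
            U = U_new
        return U, centers

    # Toy example: separate bright spot pixels from background by intensity.
    rng = np.random.default_rng(1)
    pixels = np.concatenate([rng.normal(0.2, 0.05, (200, 1)),
                             rng.normal(0.8, 0.05, (200, 1))])
    U, centers = fuzzy_c_means(pixels, c=2)
    labels = U.argmax(axis=1)
    ```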

  2. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    NASA Technical Reports Server (NTRS)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stage in the case of an aborted lunar landing. It is therefore clear that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. Rather, they were designed from the start to be general enough that any type of trajectory profile (i.e., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.) can be flown using the same rendezvous suite of algorithms. Each of these profiles makes use of maneuver types designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  3. Towards an intercomparison of automated registration algorithms for multiple source remote sensing data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Xia, Wei; Chettri, Samir; El-Ghazawi, Tarek; Kaymaz, Emre; Lerner, Bao-Ting; Mareboyana, Manohar; Netanyahu, Nathan; Pierce, John; Raghavan, Srini; Tilton, James C.; Campbell, William J.; Cromp, Robert F.

    1997-01-01

    The first step in the integration of multiple data is registration, either relative image-to-image registration or absolute geo-registration, to a map or a fixed coordinate system. As the need for automating registration techniques is recognized, we feel that there is a need to survey all the registration methods which may be applicable to Earth and space science problems and to evaluate their performances on a large variety of existing remote sensing data as well as on simulated data of soon-to-be-flown instruments. In this paper we will describe: 1) the operational toolbox which we are developing and which will consist of some of the most important registration techniques; and 2) the quantitative intercomparison of the different methods, which will allow a user to select the desired registration technique based on this evaluation and on visualization of the registration results.
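
    As one example of the kind of technique such a toolbox would include, phase correlation recovers a translation between two images from the normalized cross-power spectrum; this sketch handles integer shifts only.

    ```python
    import numpy as np

    def phase_correlation_shift(ref, moving):
        """Estimate the integer (dy, dx) translation taking `ref` to `moving`."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
        cross = F2 * np.conj(F1)
        cross /= np.abs(cross) + 1e-12            # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2:                # map wrap-around to signed shifts
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return int(dy), int(dx)

    img = np.random.default_rng(0).random((128, 128))
    shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
    print(phase_correlation_shift(img, shifted))  # -> (5, -3)
    ```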

  4. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    SciTech Connect

    Trujillo, Christopher J.

    2016-07-28

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.
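
    A generic gradient-magnitude edge extractor of the kind such a pipeline might start from (the SWIFT-specific processing is not reproduced here); the quantile threshold is an assumption.

    ```python
    import numpy as np
    from scipy import ndimage

    def edge_points(image, threshold_quantile=0.98):
        """Return (x, y) coordinates of strong Sobel edges in a 2D image."""
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        mag = np.hypot(gx, gy)
        edges = mag > np.quantile(mag, threshold_quantile)
        ys, xs = np.nonzero(edges)
        return np.column_stack([xs, ys])  # points for downstream fitting

    # Toy frame: a bright disc whose boundary should be extracted.
    yy, xx = np.mgrid[:200, :200]
    frame = ((xx - 100) ** 2 + (yy - 100) ** 2 < 60 ** 2).astype(float)
    pts = edge_points(frame)
    ```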

  5. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2015-11-01

    The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.

  6. Automated Algorithms to Identify Geostationary Satellites and Detect Mistagging using Concurrent Spatio-Temporal and Brightness Information

    NASA Astrophysics Data System (ADS)

    Dao, P.; Heinrich-Josties, E.; Boroson, T.

    2016-09-01

    Automated detection of changes of GEO satellites using photometry is fundamentally dependent on near real time association of non-resolved signatures and object identification. Non-statistical algorithms which rely on fixed positional boundaries for associating objects often result in mistags [1]. Photometry has been proposed to reduce the occurrence of mistags. In past attempts to include photometry, (1) the problem of correlation (with the catalog) has been decoupled from the photometry-based detection of change and mistagging and (2) positional information has not been considered simultaneously with photometry. The technique used in this study addresses both problems. It takes advantage of the fusion of both types of information and processes all information concurrently in a single statistics-based framework. This study demonstrates with Las Cumbres Observatory Global Telescope Network (LCOGT) data that metric information, i.e. right ascension, declination, photometry and GP element set, can be used concurrently to confidently associate (identify) GEO objects. All algorithms can easily be put into a framework to process data in near real time.

  7. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1988-01-01

    Expert systems that require access to data bases, complex simulations and real time instrumentation have both symbolic and algorithmic needs. Both of these needs could be met using a general purpose workstation running both symbolic and algorithmic codes, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by the NASA Ames Research Center in conjunction with the Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. The integration options and several possible solutions are presented.

  8. An automated diagnosis system of liver disease using artificial immune and genetic algorithms.

    PubMed

    Liang, Chunlin; Peng, Lingxi

    2013-04-01

    The rise of health care costs is one of the world's most important problems. Disease prediction is also a vibrant research area. Researchers have approached this problem using various techniques such as support vector machines and artificial neural networks. This study exploits the immune system's characteristics of learning and memory to solve the problem of liver disease diagnosis. The proposed system applies a combination of two methods, artificial immune systems and genetic algorithms, to diagnose liver disease. The system architecture is based on the artificial immune system; its learning procedure adopts a genetic algorithm to guide the evolution of the antibody population. The experiments use two benchmark datasets acquired from the well-known UCI machine learning repository. The diagnosis accuracies obtained are very promising compared with other diagnosis systems in the literature. These results suggest that this system may be a useful automatic diagnosis tool for liver disease.

  9. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Traditional expert systems, such as diagnostic and training systems, interact with users only through a keyboard and screen, and are usually symbolic in nature. Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic as well as algorithmic computing needs. These needs could both be met using a general purpose workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by NASA Ames Research Center in conjunction with Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. This paper will explore the integration options, and present several possible solutions.

  10. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer-compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.

  11. On the Automated Segmentation of Epicardial and Mediastinal Cardiac Adipose Tissues Using Classification Algorithms.

    PubMed

    Rodrigues, Érick Oliveira; Cordeiro de Morais, Felipe Fernandes; Conci, Aura

    2015-01-01

    The quantification of fat depots on the surroundings of the heart is an accurate procedure for evaluating health risk factors correlated with several diseases. However, this type of evaluation is not widely employed in clinical practice due to the required human workload. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which provided better predictive models. Experimental results have shown that the mean accuracy for the classification of epicardial and mediastinal fats was 98.4%, with a mean true positive rate of 96.2%. On average, the Dice similarity index, comparing the segmented patients against the ground truth, was 96.8%. Therefore, our technique has achieved the most accurate results to date for the automatic segmentation of cardiac fats.

  12. MECH: Algorithms and Tools for Automated Assessment of Potential Attack Locations

    DTIC Science & Technology

    2015-10-06

    context of the two modeling approaches. It shows the feasibility of self-guided situational analysis informed by MECH-based situational awareness...the MECH model. The purpose of this study is to design an accurate and robust classification algorithm that learns from available data under...minimal feasible geographic constraint. In the following figures, the size of the event class is constrained to a set of fixed values: n = [15 30 45 60

  13. Automated Conflict Resolution For Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be demonstrated under realistic traffic conditions to show that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and it must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.

  14. The automated reference toolset: A soil-geomorphic ecological potential matching algorithm

    USGS Publications Warehouse

    Nauman, Travis; Duniway, Michael C.

    2016-01-01

    Ecological inventory and monitoring data need referential context for interpretation. Identification of appropriate reference areas of similar ecological potential for site comparison is demonstrated using a newly developed automated reference toolset (ART). Foundational to identification of reference areas was a soil map of particle size in the control section (PSCS), a theme in US Soil Taxonomy. A 30-m resolution PSCS map of the Colorado Plateau (366,000 km²) was created by interpolating ∼5000 field soil observations using a random forest model and a suite of raster environmental spatial layers representing topography, climate, general ecological community, and satellite imagery ratios. The PSCS map had an overall out-of-bag accuracy of 61.8% (Kappa of 0.54, p < 0.0001), and an independent validation accuracy of 93.2% at a set of 356 field plots along the southern edge of Canyonlands National Park, Utah. The ART process was also tested at these plots, and matched plots with the same ecological sites (ESs) 67% of the time where sites fell within 2-km buffers of each other. These results show that the PSCS and ART have strong application for ecological monitoring and sampling design, as well as assessing impacts of disturbance and land management action using an ecological potential framework. Results also demonstrate that PSCS could be a key mapping layer for the USDA-NRCS provisional ES development initiative.

  15. SU-E-T-427: Feasibility Study for Evaluation of IMRT Dose Distribution Using Geant4-Based Automated Algorithms

    SciTech Connect

    Choi, H; Shin, W; Testa, M; Min, C; Kim, J

    2015-06-15

    Purpose: For intensity-modulated radiation therapy (IMRT) treatment planning validation using Monte Carlo (MC) simulations, a precise and automated procedure is necessary to evaluate the patient dose distribution. The aim of this study is to develop an automated algorithm for IMRT simulations using DICOM files and to evaluate the patient dose based on 4D simulation using the Geant4 MC toolkit. Methods: The head of a clinical linac (Varian Clinac 2300 IX) was modeled in Geant4 along with particular components such as the flattening filter and the multi-leaf collimator (MLC). Patient information and the position of the MLC were imported from the DICOM-RT interface. For each position of the MLC, a step-and-shoot technique was adopted. PDDs and lateral profiles were simulated in a water phantom (50×50×40 cm³) and compared to measurement data. We used a lung phantom, and MC dose calculations were compared to the clinical treatment planning used at the Seoul National University Hospital. Results: In order to reproduce the measurement data, we tuned three free parameters: the mean and standard deviation of the primary electron beam energy and the beam spot size. For 6 MV these parameters were found to be 5.6 MeV, 0.2378 MeV and 1 mm FWHM, respectively. The average dose difference between measurements and simulations was less than 2% for PDDs and radial profiles. The lung phantom study showed fairly good agreement between MC and planning dose despite some unavoidable statistical fluctuation. Conclusion: The current feasibility study using the lung phantom shows the potential for IMRT dose validation using 4D MC simulations with the Geant4 toolkit. This research was supported by the Korea Institute of Nuclear Safety and Development of Measurement Standards for Medical Radiation funded by the Korea Research Institute of Standards and Science. (KRISS-2015-15011032)

  16. Image processing algorithms for automated analysis of GMR data from inspection of multilayer structures

    NASA Astrophysics Data System (ADS)

    Karpenko, Oleksii; Safdernejad, Seyed; Dib, Gerges; Udpa, Lalita; Udpa, Satish; Tamburrino, Antonello

    2015-03-01

    Eddy current probes (EC) with Giant Magnetoresistive (GMR) sensors have recently emerged as a promising tool for rapid scanning of multilayer aircraft panels that helps detect cracks under fastener heads. However, analysis of GMR data is challenging due to the complexity of sensed magnetic fields. Further, probes that induce unidirectional currents are insensitive to cracks parallel to the current flow. In this paper, signal processing algorithms are developed for mixing data from two orthogonal EC-GMR scans in order to generate pseudo-rotating electromagnetic field images of fasteners with bottom layer cracks. Finite element simulations demonstrate that the normal component of numerically computed rotating field has uniform sensitivity to cracks emanating in all radial directions. The concept of pseudo-rotating field imaging is experimentally validated with the help of MAUS bilateral GMR array (Big-MR) designed by Boeing.

  17. Automated identification of depsipeptide natural products by an informatic search algorithm.

    PubMed

    Skinnider, Michael A; Johnston, Chad W; Zvanych, Rostyslav; Magarvey, Nathan A

    2015-01-19

    Nonribosomal depsipeptides are a class of potent microbial natural products, which include several clinically approved pharmaceutical agents. Genome sequencing has revealed a large number of uninvestigated natural-product biosynthetic gene clusters. However, while novel informatic search methods to access these gene clusters have been developed to identify peptide natural products, depsipeptide detection has proven challenging. Herein, we present an improved version of our informatic search algorithm for natural products (iSNAP), which facilitates the detection of known and genetically predicted depsipeptides in complex microbial culture extracts. We validated this technology by identifying several depsipeptides from novel producers, and located a large number of novel depsipeptide gene clusters for future study. This approach highlights the value of chemoinformatic search methods for the discovery of genetically encoded metabolites by targeting specific areas of chemical space.

  18. Recognition of pharmaceuticals with compact mini-Raman-spectrometer and automized pattern recognition algorithms

    NASA Astrophysics Data System (ADS)

    Jähme, Hendrik; Di Florio, Giuseppe; Conti Nibali, Valeria; Esen, Cemal; Ostendorf, Andreas; Grafen, Markus; Henke, Erich; Soetebier, Jens; Brenner, Carsten; Havenith, Martina; Hofmann, Martin R.

    2016-04-01

    Robust classification of pharmaceuticals in an industrial process is an important step for validation of the final product. Especially for pharmaceuticals with similar visual appearance, quality control is only possible if a reliable algorithm based on easily obtainable spectroscopic data is available. We used Principal Component Analysis (PCA) and Support Vector Machines (SVM) on Raman spectroscopy data from a compact Raman system to classify several look-alike pharmaceuticals. This paper describes the data gathering and analysis process used to robustly discriminate 19 different pharmaceuticals with similar visual appearance. With the described process we successfully identified all given pharmaceuticals that contained a significant amount of active ingredient. Automatic validation of these pharmaceuticals in a process can thus be used to prevent wrong administration of look-alike drugs in an industrial setting, e.g., patient-individual blistering.
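
    The PCA-plus-SVM stage maps naturally onto a scikit-learn pipeline; the sketch below uses synthetic stand-in spectra, and the component count and SVM settings are assumptions rather than the authors' tuned values.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in data: one row per Raman spectrum, 19 drug classes.
    rng = np.random.default_rng(0)
    n_classes, n_per_class, n_wavenumbers = 19, 20, 1024
    centers = rng.normal(size=(n_classes, n_wavenumbers))
    spectra = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_wavenumbers))
                         for c in centers])
    labels = np.repeat(np.arange(n_classes), n_per_class)

    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=20),   # compress spectra to 20 PCs
                        SVC(kernel="rbf", C=10.0))
    print(cross_val_score(clf, spectra, labels, cv=5).mean())
    ```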

  19. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.
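
    A compact sketch of the GA-plus-DSM idea: chromosomes assign each element to a cluster, and fitness penalizes dependencies that cross cluster boundaries plus oversized clusters. The cost form, operators, and parameters are illustrative assumptions, not the Excel-macro implementation described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cost(assign, dsm, lam=0.05):
        """Cross-cluster dependency weight plus a quadratic size penalty."""
        cross = dsm[assign[:, None] != assign[None, :]].sum()
        _, sizes = np.unique(assign, return_counts=True)
        return cross + lam * (sizes ** 2).sum()

    def ga_cluster(dsm, n_clusters=3, pop=60, gens=300, mut=0.1):
        n = len(dsm)
        population = rng.integers(0, n_clusters, size=(pop, n))
        for _ in range(gens):
            fitness = np.array([cost(ind, dsm) for ind in population])
            order = np.argsort(fitness)
            parents = population[order[:pop // 2]]       # truncation selection
            pa = parents[rng.integers(0, len(parents), pop)]
            pb = parents[rng.integers(0, len(parents), pop)]
            mask = rng.random((pop, n)) < 0.5            # uniform crossover
            children = np.where(mask, pa, pb)
            mmask = rng.random((pop, n)) < mut           # mutation
            children[mmask] = rng.integers(0, n_clusters, mmask.sum())
            children[0] = population[order[0]]           # elitism
            population = children
        fitness = np.array([cost(ind, dsm) for ind in population])
        return population[np.argmin(fitness)]

    # Toy DSM with three coupled blocks of four elements each.
    dsm = np.kron(np.eye(3), np.ones((4, 4))) + 0.05 * rng.random((12, 12))
    np.fill_diagonal(dsm, 0)
    print(ga_cluster(dsm))
    ```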

  20. An algorithmic scheme for the automated calculation of fiber orientations in arterial walls

    NASA Astrophysics Data System (ADS)

    Fausten, Simon; Balzani, Daniel; Schröder, Jörg

    2016-11-01

    We propose an algorithmic scheme for the numerical calculation of fiber orientations in arterial walls. The basic assumption behind the procedure is that the fiber orientations are mainly governed by the principal tensile stress directions resulting in an improved load transfer within the artery as a consequence of the redistribution of stresses. This reflects the biological motivation that soft tissues continuously adapt to their mechanical environment in order to optimize their load-bearing capacities. The algorithmic scheme proposed here enhances efficiency of the general procedure given in Hariton et al. (Biomech Model Mechanobiol 6(3):163-175, 2007), which consists of repeatedly identifying a favored fiber orientation based on the principal tensile stresses under a certain loading scenario, and then re-calculating the stresses for that loading scenario with the modified favored fiber orientation. Since the method still depends on a highly accurate stress approximation of the finite element formulation, which is not straightforward to obtain in particular for incompressible and highly anisotropic materials, furthermore, a modified model is introduced. This model defines the favored fiber orientation not only in terms of the local principal stresses, but in terms of the volume averages of the principal stresses computed over individual finite elements. Thereby, the influence of imperfect stress approximations can be weakened leading to a stabilized convergence of the reorientation procedure and a more reasonable fiber orientation with less numerical noise. The performance of the proposed fiber reorientation scheme is investigated with respect to different finite element formulations and different favored fiber orientation models, Hariton et al. (Biomech Model Mechanobiol 6(3):163-175, 2007) and Cyron and Humphrey (Math Mech Solids 1-17, 2014). In addition, it is applied to calculate the fiber orientation in a patient-specific arterial geometry.
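
    The geometric core of such reorientation schemes is small: diagonalize the (element-averaged) stress tensor and place the two fiber families symmetrically about the dominant tensile direction, in the plane of the two largest principal stresses. In the cited models the half-angle is tied to the principal stress ratio; the fixed angle below is purely an assumption of this sketch.

    ```python
    import numpy as np

    def favored_fiber_directions(sigma, half_angle=0.6981):  # ~40 degrees
        """Two unit fiber directions placed symmetrically about the largest
        principal tensile direction of a symmetric 3x3 stress tensor."""
        w, V = np.linalg.eigh(sigma)      # eigenvalues in ascending order
        e1, e2 = V[:, 2], V[:, 1]         # first and second principal directions
        d_plus = np.cos(half_angle) * e1 + np.sin(half_angle) * e2
        d_minus = np.cos(half_angle) * e1 - np.sin(half_angle) * e2
        return d_plus, d_minus            # already unit length (e1 is orthogonal to e2)

    sigma = np.array([[120.0, 20.0, 0.0],
                      [20.0, 80.0, 10.0],
                      [0.0, 10.0, 30.0]])  # symmetric Cauchy stress, kPa
    f1, f2 = favored_fiber_directions(sigma)
    ```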

  1. A Computer-Based Automated Algorithm for Assessing Acinar Cell Loss after Experimental Pancreatitis

    PubMed Central

    Eisses, John F.; Davis, Amy W.; Tosun, Akif Burak; Dionise, Zachary R.; Chen, Cheng; Ozolek, John A.; Rohde, Gustavo K.; Husain, Sohail Z.

    2014-01-01

    The change in exocrine mass is an important parameter to follow in experimental models of pancreatic injury and regeneration. However, at present, the quantitative assessment of exocrine content by histology is tedious and operator-dependent, requiring manual assessment of acinar area on serial pancreatic sections. In this study, we utilized a novel computer-generated learning algorithm to construct an accurate and rapid method of quantifying acinar content. The algorithm works by learning differences in pixel characteristics from input examples provided by human experts. H&E-stained pancreatic sections were obtained in mice recovering from a 2-day, hourly caerulein hyperstimulation model of experimental pancreatitis. For training data, a pathologist carefully outlined discrete regions of acinar and non-acinar tissue in 21 sections at various stages of pancreatic injury and recovery (termed the “ground truth”). After the expert defined the ground truth, the computer was able to develop a prediction rule that was then applied to a unique set of high-resolution images in order to validate the process. For baseline, non-injured pancreatic sections, the software demonstrated close agreement with the ground truth in identifying baseline acinar tissue area with only a difference of 1%±0.05% (p = 0.21). Within regions of injured tissue, the software reported a difference of 2.5%±0.04% in acinar area compared with the pathologist (p = 0.47). Surprisingly, on detailed morphological examination, the discrepancy was primarily because the software outlined acini and excluded inter-acinar and luminal white space with greater precision. The findings suggest that the software will be of great potential benefit to both clinicians and researchers in quantifying pancreatic acinar cell flux in the injured and recovering pancreas. PMID:25343460

  2. Security system signal supervision

    SciTech Connect

    Chritton, M.R. ); Matter, J.C. )

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees for understanding and applying line supervision techniques to security communication links. A review of security communication links is followed by detailed discussions of link physical protection and DC/AC static supervision and dynamic supervision techniques. Material is also presented on security for atmospheric transmission and video line supervision. A glossary of security communication line supervision terms is appended. 16 figs.

  3. 3D position of radiation sources using an automated gamma camera and ML algorithm with energy-dependent response functions

    NASA Astrophysics Data System (ADS)

    Lee, Wonho; Wehe, David

    2004-09-01

    Portable γ-ray imaging systems operating from 100 keV to 3 MeV are used in nuclear medicine, astrophysics and industrial applications. 2D images of γ-rays are common in many fields using radiation-detection systems (Appl. Opt. 17 (3) (1978) 337; IEEE Trans. Nucl. Sci. NS-31 (1984) 771; IEEE Trans. Nucl. Sci. NS-44 (3) (1997) 911). In this work, the 3D position of a radiation source is determined by a portable gamma-ray imaging system. 2D gamma-ray images were obtained from different positions of the gamma camera and the third dimension, the distance between the detector and the radiation source, was calculated using triangulation. The imaging system consists of a 4×4 array of CsI(Tl) detectors coupled to photodiode detectors that are mounted on an automated table which can precisely position the angular axis of the camera. Lead shields the detector array from the background radiation. Additionally, a CCD camera is attached to the top of the gamma camera and provides coincident 2D visual information. The inferred distances from the center of the two measurement points and a radiation source had less than a 3% error within a range of 3 m. The radiation image from the gamma camera and the visual image from the CCD camera are superimposed into one combined image using a maximum-likelihood (ML) algorithm to make the image more precise. The response functions for the ML algorithm depend on the energy of incident radiation, and are obtained from both experiments and simulations. The energy-dependent response functions are shown to yield better imaging performance compared with the fixed-energy response function commonly used previously.

  4. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.

  5. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    PubMed

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and the open-source Python environment. The underlying algorithm applies spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing the amplitudes and kinetics of events.

  6. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    PubMed

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm-based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis.

  7. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    SciTech Connect

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine

    2015-10-15

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.

  8. Automated Means of Identifying Landslide Deposits using LiDAR Data using the Contour Connection Method Algorithm

    NASA Astrophysics Data System (ADS)

    Olsen, M. J.; Leshchinsky, B. A.; Tanyu, B. F.

    2014-12-01

    Landslides are a global natural hazard, resulting in severe economic, environmental and social impacts every year. Often, landslides occur in areas of repeated slope instability, but despite these trends, significant residential developments and critical infrastructure are built in the shadow of past landslide deposits and marginally stable slopes. These hazards, despite their sometimes enormous scale and regional propensity, are difficult to detect on the ground, often due to vegetative cover. However, new developments in remote sensing technology, specifically Light Detection and Ranging (LiDAR) mapping, are providing a new means of viewing our landscape. Airborne LiDAR, combined with a level of post-processing, enables the creation of spatial data representative of the earth beneath the vegetation, highlighting the scars of unstable slopes of the past. This presents a revolutionary technique for mapping landslide deposits and their associated regions of risk; yet their inventorying is often done manually, an approach that can be tedious, time-consuming and subjective. The LiDAR bare earth data, however, present the opportunity to use this remote sensing technology and typical landslide geometry to create an automated algorithm that can detect and inventory deposits on a landscape scale. This algorithm, called the Contour Connection Method (CCM), functions by first detecting steep gradients, often associated with the headscarp of a failed hillslope, and initiating a search, highlighting deposits downslope of the failure. Based on input search gradients, CCM can assist in highlighting regions identified as landslides consistently on a landscape scale, capable of mapping more than 14,000 hectares rapidly (<30 minutes). CCM has shown preliminary agreement with manual landslide inventorying in Oregon's Coast Range, realizing almost 90% agreement with inventorying performed by a trained geologist. The global threat of landslides necessitates

  9. Analysis of new bone, cartilage, and fibrosis tissue in healing murine allografts using whole slide imaging and a new automated histomorphometric algorithm

    PubMed Central

    Zhang, Longze; Chang, Martin; Beck, Christopher A; Schwarz, Edward M; Boyce, Brendan F

    2016-01-01

    Histomorphometric analysis of histologic sections of normal and diseased bone samples, such as healing allografts and fractures, is widely used in bone research. However, the utility of traditional semi-automated methods is limited because they are labor-intensive and can have high interobserver variability depending upon the parameters being assessed, and primary data cannot be re-analyzed automatically. Automated histomorphometry has long been recognized as a solution for these issues, and recently has become more feasible with the development of digital whole slide imaging and computerized image analysis systems that can interact with digital slides. Here, we describe the development and validation of an automated application (algorithm) using Visiopharm's image analysis system to quantify newly formed bone, cartilage, and fibrous tissue in healing murine femoral allografts in high-quality digital images of H&E/alcian blue-stained decalcified histologic sections. To validate this algorithm, we compared the results obtained independently using OsteoMeasure™ and Visiopharm image analysis systems. The intraclass correlation coefficient between Visiopharm and OsteoMeasure was very close to one for all tissue elements tested, indicating nearly perfect reproducibility across methods. This new algorithm represents an accurate and labor-efficient method to quantify bone, cartilage, and fibrous tissue in healing mouse allografts. PMID:26816658

  10. Differences between the CME fronts tracked by an expert, an automated algorithm, and the Solar Stormwatch project

    NASA Astrophysics Data System (ADS)

    Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.

    2015-10-01

    Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.

  11. A Supervision of Solidarity

    ERIC Educational Resources Information Center

    Reynolds, Vikki

    2010-01-01

    This article illustrates an approach to therapeutic supervision informed by a philosophy of solidarity and social justice activism. Called a "Supervision of Solidarity", this approach addresses the particular challenges in the supervision of therapists who work alongside clients who are subjected to social injustice and extreme marginalization. It…

  12. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  14. Automated clustering of probe molecules from solvent mapping of protein surfaces: new algorithms applied to hot-spot mapping and structure-based drug design

    NASA Astrophysics Data System (ADS)

    Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.

    2008-10-01

    Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.

  15. A New List of Flux Transfer Events in the CLUSTER Data by Use of an Automated Technique

    NASA Astrophysics Data System (ADS)

    Sipes, T.; Karimabadi, H.; Wang, Y.; Lavraud, B.

    2007-12-01

    We have used our newly developed data mining software called MineTool for automated detection of flux transfer events (FTEs) in the CLUSTER data. Data mining techniques can be divided into two types, supervised and unsupervised. In supervised algorithms like MineTool, one teaches the algorithm using examples from labeled data. In the case of FTEs, the user provides examples of FTEs as well as examples of non-FTEs and labels the data accordingly. We used a list of FTEs compiled by Y. Wang to create the labeled data. We then used MineTool on this data set to develop an automated detection model for FTEs. Finally, we applied this model to CLUSTER data to search for new FTEs. We have compiled a list of new FTEs, which is made publicly available.

  16. Detection of facilities in satellite imagery using semi-supervised image classification and auxiliary contextual observables

    SciTech Connect

    Harvey, Neal R; Ruggiero, Christy E; Pawley, Norma H; Brumby, Steven P; Macdonald, Brian; Balick, Lee; Oyer, Alden

    2009-01-01

    Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables, extractable by pixel-level feature detectors, that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for the automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach with an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields, and residential areas, and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.

  17. Nonalcoholic Fatty Liver Disease (NAFLD) in the Veterans Administration Population: Development and Validation of an Algorithm for NAFLD using Automated Data

    PubMed Central

    Husain, Nisreen; Blais, Peter; Kramer, Jennifer; Kowalkowski, Marc; Richardson, Peter; El-Serag, Hashem B.; Kanwal, Fasiha

    2017-01-01

    Background: In practice, non-alcoholic fatty liver disease (NAFLD) is diagnosed based on elevated liver enzymes and confirmatory liver biopsy or abdominal imaging. Neither method is feasible for identifying individuals with NAFLD in a large-scale healthcare system. Aim: To develop and validate an algorithm to identify patients with NAFLD using automated data. Methods: Using the Veterans Administration Corporate Data Warehouse, we identified patients who had persistent ALT elevation (≥2 values ≥40 IU/ml, ≥6 months apart) and did not have evidence of hepatitis B, hepatitis C, or excessive alcohol use. We conducted a structured chart review of 450 patients classified as NAFLD and 150 patients classified as non-NAFLD by the database algorithm, and subsequently refined the database algorithm. Results: The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the initial database definition of NAFLD were 78.4% (95% CI: 70.0-86.8%), 74.5% (95% CI: 68.1-80.9%), 64.1% (95% CI: 56.4-71.7%), and 85.6% (95% CI: 79.4-91.8%), respectively. Reclassifying patients as having NAFLD if they had two elevated ALT values at least 6 months apart but within 2 years of each other increased the specificity and PPV of the algorithm to 92.4% (95% CI: 88.8-96.0%) and 80.8% (95% CI: 72.5-89.0%), respectively. However, the sensitivity and NPV decreased to 55.0% (95% CI: 46.1-63.9%) and 78.0% (95% CI: 72.1-83.8%), respectively. Conclusions: Predictive algorithms using automated data can be used to identify patients with NAFLD, determine the prevalence of NAFLD at the system-wide level, and may help select a target population for future clinical studies in veterans with NAFLD. PMID:25155259
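    The refined rule lends itself to a compact implementation. Below is a minimal sketch of the date-gap logic only (two ALT values ≥40 IU/ml, at least 6 months but no more than 2 years apart); the hepatitis and alcohol exclusions are not modeled, and all names and thresholds here are illustrative assumptions, not the authors' code.

```python
# Sketch of the refined NAFLD database rule: flag a patient if any two
# elevated ALT values fall at least 6 months but at most 2 years apart.
from datetime import date
from itertools import combinations

def flags_nafld(alt_values, threshold=40, min_gap_days=183, max_gap_days=730):
    """alt_values: iterable of (date, ALT in IU/ml) pairs for one patient."""
    elevated = sorted(d for d, v in alt_values if v >= threshold)
    return any(min_gap_days <= (later - earlier).days <= max_gap_days
               for earlier, later in combinations(elevated, 2))

# Example: two elevated ALTs about 8 months apart -> flagged
print(flags_nafld([(date(2015, 1, 5), 55), (date(2015, 9, 1), 62)]))  # True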

  18. Validation of the Total Visual Acuity Extraction Algorithm (TOVA) for Automated Extraction of Visual Acuity Data From Free Text, Unstructured Clinical Records

    PubMed Central

    Baughman, Douglas M.; Su, Grace L.; Tsui, Irena; Lee, Cecilia S.; Lee, Aaron Y.

    2017-01-01

    Purpose With increasing volumes of electronic health record data, algorithm-driven extraction may aid manual extraction. Visual acuity often is extracted manually in vision research. The total visual acuity extraction algorithm (TOVA) is presented and validated for automated extraction of visual acuity from free text, unstructured clinical notes. Methods Consecutive inpatient ophthalmology notes over an 8-year period from the University of Washington healthcare system in Seattle, WA were used for validation of TOVA. The total visual acuity extraction algorithm applied natural language processing to recognize Snellen visual acuity in free text notes and assign laterality. The best corrected measurement was determined for each eye and converted to logMAR. The algorithm was validated against manual extraction of a subset of notes. Results A total of 6266 clinical records were obtained giving 12,452 data points. In a subset of 644 validated notes, comparison of manually extracted data versus TOVA output showed 95% concordance. Interrater reliability testing gave κ statistics of 0.94 (95% confidence interval [CI], 0.89–0.99), 0.96 (95% CI, 0.94–0.98), 0.95 (95% CI, 0.92–0.98), and 0.94 (95% CI, 0.90–0.98) for acuity numerators, denominators, adjustments, and signs, respectively. Pearson correlation coefficient was 0.983. Linear regression showed an R2 of 0.966 (P < 0.0001). Conclusions The total visual acuity extraction algorithm is a novel tool for extraction of visual acuity from free text, unstructured clinical notes and provides an open source method of data extraction. Translational Relevance Automated visual acuity extraction through natural language processing can be a valuable tool for data extraction from free text ophthalmology notes. PMID:28299240
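    The core conversion step is simple to sketch: a Snellen fraction num/den maps to logMAR = log10(den/num), and the best corrected acuity is the lowest logMAR found. The regex and note format below are crude illustrative assumptions, not the published TOVA implementation.

```python
# Hedged sketch: extract Snellen fractions from free text and return the
# best (lowest) logMAR value. Real notes need laterality and adjustment
# handling, which TOVA performs and this sketch omits.
import math
import re

SNELLEN = re.compile(r"\b(\d{2})\s*/\s*(\d{2,3})\b")  # e.g. "20/40"

def best_logmar(note: str):
    vals = [math.log10(int(den) / int(num))
            for num, den in SNELLEN.findall(note)]
    return min(vals) if vals else None

print(best_logmar("VA OD 20/40, improving to 20/25 with pinhole"))  # ~0.097
```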

  19. Repeatability and Reproducibility of Eight Macular Intra-Retinal Layer Thicknesses Determined by an Automated Segmentation Algorithm Using Two SD-OCT Instruments

    PubMed Central

    Huang, Shenghai; Leng, Lin; Zhu, Dexi; Lu, Fan

    2014-01-01

    Purpose To evaluate the repeatability, reproducibility, and agreement of thickness profile measurements of eight intra-retinal layers determined by an automated algorithm applied to optical coherence tomography (OCT) images from two different instruments. Methods Twenty normal subjects (12 males, 8 females; 24 to 32 years old) were enrolled. Imaging was performed with a custom-built ultra-high resolution OCT instrument (UHR-OCT, ∼3 µm resolution) and a commercial RTVue100 OCT instrument (∼5 µm resolution). An automated algorithm was developed to segment the macular retina into eight layers and quantify the thickness of each layer. The right eye of each subject was imaged twice by the first examiner using each instrument to assess intra-observer repeatability, and once by a second examiner to assess inter-observer reproducibility. The intraclass correlation coefficient (ICC) and coefficients of repeatability and reproducibility (COR) were analyzed to evaluate reliability. Results The ICCs for intra-observer repeatability and inter-observer reproducibility of both SD-OCT instruments were greater than 0.945 for the total retina and all intra-retinal layers, except the photoreceptor inner segments, which ranged from 0.051 to 0.643, and the outer segments, which ranged from 0.709 to 0.959. The CORs were less than 6.73% for the total retina and all intra-retinal layers. The total retinal thickness measured by the UHR-OCT was significantly thinner than that measured by the RTVue100. However, the ICCs for agreement of the thickness profiles between the UHR-OCT and RTVue100 were greater than 0.80, except for the inner segment and outer segment layers. Conclusions Thickness measurements of the intra-retinal layers determined by the automated algorithm are reliable when applied to images acquired by the UHR-OCT and RTVue100 instruments. PMID:24505345

  20. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    Method of frequency determination in software metric data through the use of the Multiple Signal Classification (MUSIC) algorithm… graph showing the estimated power spectral density (PSD) generated by the MUSIC algorithm from the data set used… implemented in this module; however, it is preferred to use the Multiple Signal Classification (MUSIC) algorithm. The MUSIC algorithm is…

  1. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    …model-based distance metrics; 2) our advocated use of the cosine distance metric yields consistent increases in speaker clustering performance as compared to the commonly used Euclidean distance metric; 3) our partially supervised speaker clustering concept and strategies significantly improve speaker clustering performance over the baselines; and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.

  2. Theme: Supervised Experience.

    ERIC Educational Resources Information Center

    Cox, David E.; And Others

    1991-01-01

    Includes "It's Time to Stop Quibbling over the Acronym" (Cox); "Information Rich--Experience Poor" (Elliot et al.); "Supervised Agricultural Experience Selection Process" (Yokum, Boggs); "Point System" (Fraze, Vaughn); "Urban Diversity Rural Style" (Morgan, Henry); "Nonoccupational Supervised Experience" (Croom); "Reflecting Industry" (Miller);…

  3. Networks of Professional Supervision

    ERIC Educational Resources Information Center

    Annan, Jean; Ryba, Ken

    2013-01-01

    An ecological analysis of the supervisory activity of 31 New Zealand school psychologists examined simultaneously the theories of school psychology, supervision practices, and the contextual qualities that mediated participants' supervisory actions. The findings indicated that the school psychologists worked to achieve the supervision goals of…

  4. Butterflies, Bugs and Supervising Teachers.

    ERIC Educational Resources Information Center

    Morris, John E.; And Others

    1979-01-01

    Presented is an effective, nonthreatening way to provide feedback to supervising teachers. It involves an exercise called "Butterflies (ways supervising teachers helped) and Bugs (behaviors of supervising teachers which were detrimental or unprofessional)." (KC)

  5. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    ERIC Educational Resources Information Center

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  6. An Automated Algorithm for Measurement of Surgical Tip Excursion in Ultrasonic Vibration Using the Spatial 2-Dimensional Fourier Transform in an Optical Image

    NASA Astrophysics Data System (ADS)

    Manandhar, Prakash; Ward, Andrew; Allen, Patrick; Cotter, Daniel J.

    The International Electrotechnical Commission (IEC) has defined standard IEC 61847 (First Edition, 1998) for the characterization of ultrasonic surgical systems. This standard prescribes several methods for the measurement of primary tip vibration excursion. The first method described in the standard uses an optical microscope and relies on the motion blur of a vibrating object as it is imaged at the low frame rates (e.g., 30 Hz) of conventional video equipment. This method, which predates the standard, is widely used in ultrasonic surgical instrument design, and tip excursion is one of the key parameters that surgeons who use these devices are aware of. It is relatively easily measured using a microscope system. Although the method is widespread, its accuracy is highly dependent on factors such as operator training, microscope lighting, and modulation of surgical tip motion. It is also a manual and time-consuming measurement, so continuous measurement describing dynamics at the microsecond scale is impossible. Here we describe an algorithm to automate this measurement so that it can be performed at high speed without operator training, reducing human error and operator variation. The algorithm derives from techniques used in motion blur estimation and reduction in the image-processing literature. A 2-dimensional spatial Fourier transform is computed from the microscope image of an ultrasonically vibrating tip. A peak detection algorithm is used along with pre-processing to reduce noise. The separation of peaks in the Fourier domain is used to estimate tip excursion. We present data showing an error of about 1% between the manual and automated methods when measurements are in the range of 300 microns, and about 20% when measurements are in the range of 30 microns.
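    The Fourier-domain principle can be sketched compactly: uniform motion blur of extent L multiplies the image spectrum by a sinc envelope whose nulls are spaced roughly N/L frequency bins apart, so locating the first null recovers the blur extent, i.e. the excursion. The sketch below illustrates only this idea; the 5% null threshold and the simple null finder are assumptions, not the authors' peak-detection pipeline.

```python
# Loose sketch: estimate motion-blur extent (tip excursion, in pixels)
# from the spacing of sinc nulls in the image spectrum.
import numpy as np

def excursion_from_blur(image: np.ndarray) -> float:
    spec = np.abs(np.fft.fft2(image)).mean(axis=0)   # spectrum along x
    half = spec[1:len(spec) // 2]                    # one-sided, DC removed
    is_null = ((half < np.roll(half, 1)) & (half < np.roll(half, -1))
               & (half < 0.05 * half.max()))
    nulls = np.flatnonzero(is_null)
    # first null sits near frequency bin N / L  ->  L ~ N / k0
    return image.shape[1] / (nulls[0] + 1) if len(nulls) else float("nan")

img = np.zeros((64, 256))
img[:, 100:120] = 1.0                # a bright tip blurred over 20 pixels
print(excursion_from_blur(img))      # close to 20
```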

  7. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters

    PubMed Central

    Rempe, Michael J; Clegern, William C; Wisor, Jonathan P

    2015-01-01

    Introduction Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2-10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS) on the basis of visual inspection. Automated state scoring can minimize the burden associated with state scoring and thereby facilitate the use of shorter epoch durations. Methods We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human sleep-state scoring of data from C57BL/6J and BALB/cJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration. Results More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of the epochs scored as REMS by the human, more than 10% were scored as SWS by the machine, and 18% (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated and manual scoring methods. Error associated with mathematical modeling of the temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly between automated and visual state scoring, or was reduced with automated state scoring.
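    A minimal sketch of the core classifier (PCA features feeding a naïve Bayes model) follows; the per-epoch features, synthetic data, and component count are placeholders, and the semiautomated review step described by the authors is not reproduced.

```python
# PCA + naive Bayes state scorer, the core of the pipeline above.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 10))            # per-epoch EEG/EMG features
y_train = rng.choice(["wake", "SWS", "REMS"], size=300)

scorer = make_pipeline(PCA(n_components=3), GaussianNB())
scorer.fit(X_train, y_train)                    # learn from human-scored epochs
print(scorer.predict(X_train[:5]))              # score new 2-s or 10-s epochs
```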

  8. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    NASA Astrophysics Data System (ADS)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and has gradually changed the way we live. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce logistics. In addition, a matching problem concerning the improvement of picking efficiency in the new system is studied. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on a formal statement and model of this problem. The main idea of the algorithm is to minimize, using heuristic strategies based on similarity coefficients, the transportation of items that cannot reach their destination picking stations through direct conveyors alone. Experimental results based on computer-generated cases show that the average reduction in indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposes a modified system and a multi-stage heuristic algorithm that effectively reduce the travelling distance of totes and improve the overall performance of the e-commerce distribution center.

  9. A practical tool for public health surveillance: Semi-automated coding of short injury narratives from large administrative databases using Naïve Bayes algorithms.

    PubMed

    Marucci-Wellman, Helen R; Lehto, Mark R; Corns, Helen L

    2015-11-01

    Public health surveillance programs in the U.S. are undergoing landmark changes with the availability of electronic health records and advancements in information technology. Injury narratives gathered from hospital records, workers' compensation claims, or national surveys can be very useful for identifying antecedents to injury or emerging risks. However, classifying narratives manually can become prohibitive for large datasets. The purpose of this study was to develop a human-machine system that could be relatively easily tailored to routinely and accurately classify injury narratives from large administrative databases such as workers' compensation. We used a semi-automated approach based on two Naïve Bayesian algorithms to classify 15,000 workers' compensation narratives into two-digit Bureau of Labor Statistics (BLS) event (leading to injury) codes. Narratives were filtered out for manual review if the algorithms disagreed or made weak predictions. This approach resulted in an overall accuracy of 87%, with consistently high positive predictive values across all two-digit BLS event categories, including the very small categories (e.g., exposure to noise, needle sticks). The Naïve Bayes algorithms were able to identify and accurately machine-code most narratives, leaving only 32% (4,853) for manual review. This strategy substantially reduces the need for resources compared with manual review alone.
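    The agreement-based routing strategy is easy to sketch. Below, two Naive Bayes variants (multinomial and complement, as stand-ins for the paper's two Naive Bayesian algorithms) machine-code a narrative only when they agree with high confidence; the toy narratives and codes are illustrative assumptions.

```python
# Sketch of agreement-based routing: machine-code a narrative only when
# both models agree confidently; otherwise send it to manual review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import ComplementNB, MultinomialNB

narratives = ["worker slipped on wet floor", "struck by falling box",
              "fell from ladder while painting", "hand caught in press"]
codes = ["42", "62", "43", "64"]   # illustrative two-digit event codes

vec = CountVectorizer()
X = vec.fit_transform(narratives)
nb_a = MultinomialNB().fit(X, codes)
nb_b = ComplementNB().fit(X, codes)

def route(text: str, min_prob: float = 0.9) -> str:
    x = vec.transform([text])
    pa, pb = nb_a.predict(x)[0], nb_b.predict(x)[0]
    conf = min(nb_a.predict_proba(x).max(), nb_b.predict_proba(x).max())
    return pa if pa == pb and conf >= min_prob else "MANUAL_REVIEW"

print(route("employee slipped on a wet floor"))
```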

  10. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the class models. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval is tedious and time-consuming. Unsupervised approaches, on the other hand, avoid these limitations but often do not reach results comparable to those of supervised methods. Accordingly, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means, and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach, supported by tissue probability maps, is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of the unsupervised approaches and seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  11. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the class models. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval is tedious and time-consuming. Unsupervised approaches, on the other hand, avoid these limitations but often do not reach results comparable to those of supervised methods. Accordingly, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means, and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach, supported by tissue probability maps, is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of the unsupervised approaches and seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
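    A minimal sketch of the GMM variant on multiparametric voxel intensities follows; the four channels, five classes, and synthetic data are assumptions, and the tissue-probability-map postprocess is not reproduced.

```python
# Unsupervised GMM clustering of multiparametric MR voxel intensities;
# channel names, class count, and data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
voxels = rng.normal(size=(5000, 4))  # columns: T1, T1c, T2, FLAIR (assumed)

gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0)
labels = gmm.fit_predict(voxels)     # unsupervised tissue/tumour classes
print(np.bincount(labels))
```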

  12. Optimal installation locations for automated external defibrillators in Taipei 7-Eleven stores: using GIS and a genetic algorithm with a new stirring operator.

    PubMed

    Huang, Chung-Yuan; Wen, Tzai-Hung

    2014-01-01

    Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.

  13. Two Approaches to Clinical Supervision.

    ERIC Educational Resources Information Center

    Anderson, Eugene M.

    Criteria are established for a definition of "clinical supervision" and the effectiveness of such supervisory programs in a student teaching context are considered. Two differing genres of clinical supervision are constructed: "supervision by pattern analysis" is contrasted with "supervision by performance objectives." An outline of procedural…

  14. An efficient semi-supervised classification approach for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Tan, Kun; Li, Erzhu; Du, Qian; Du, Peijun

    2014-11-01

    In this paper, an efficient semi-supervised support vector machine (SVM) with segmentation-based ensemble (S2SVMSE) algorithm is proposed for hyperspectral image classification. The algorithm utilizes spatial information extracted by a segmentation algorithm for unlabeled sample selection. The unlabeled samples most similar to the labeled ones are found, and the candidate set of unlabeled samples is enlarged to the corresponding image segments. To ensure that the finally selected unlabeled samples are spatially widely distributed and less correlated, random selection is conducted, with flexibility in the number of unlabeled samples actually participating in semi-supervised learning. Classification is further refined through a spectral-spatial feature ensemble technique. The proposed method, using very limited labeled training samples, is evaluated via experiments with two real hyperspectral images, where it outperforms the fully supervised SVM and the semi-supervised version without the spectral-spatial ensemble.
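    The general self-training pattern behind such methods can be sketched as below; this is a simplified loop in the spirit of S2SVMSE with the segmentation step omitted, and all data, thresholds, and class counts are synthetic assumptions.

```python
# Simplified self-training: confidently classified unlabeled samples are
# pseudo-labeled, randomly thinned for diversity, and added to training.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_lab = rng.normal(size=(40, 8))          # labeled spectra (synthetic)
y_lab = rng.integers(0, 3, size=40)       # three land-cover classes
X_unlab = rng.normal(size=(400, 8))       # unlabeled spectra

svm = SVC(probability=True).fit(X_lab, y_lab)
proba = svm.predict_proba(X_unlab)
confident = np.flatnonzero(proba.max(axis=1) > 0.8)
picked = rng.permutation(confident)[:50]  # random thinning for diversity
X_aug = np.vstack([X_lab, X_unlab[picked]])
y_aug = np.concatenate([y_lab, proba[picked].argmax(axis=1)])
svm = SVC(probability=True).fit(X_aug, y_aug)
```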

  15. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data, as obtained from the National Agriculture Imagery Program (NAIP), for deriving tree cover estimates for the whole of the Continental United States using a High Performance Computing architecture. The results from the classification and segmentation algorithms are consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps capture the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches, leading to a significant improvement in true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  16. MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images.

    PubMed

    Yuan, Xiaosong; Trachtenberg, Joshua T; Potter, Steve M; Roysam, Badrinath

    2009-12-01

    This paper presents a method for improved automatic delineation of dendrites and spines from three-dimensional (3-D) images of neurons acquired by confocal or multi-photon fluorescence microscopy. The core advance presented here is a direct grayscale skeletonization algorithm that is constrained by a structural complexity penalty using the minimum description length (MDL) principle, and additional neuroanatomy-specific constraints. The 3-D skeleton is extracted directly from the grayscale image data, avoiding errors introduced by image binarization. The MDL method achieves a practical tradeoff between the complexity of the skeleton and its coverage of the fluorescence signal. Additional advances include the use of 3-D spline smoothing of dendrites to improve spine detection, and graph-theoretic algorithms to explore and extract the dendritic structure from the grayscale skeleton using an intensity-weighted minimum spanning tree (IW-MST) algorithm. This algorithm was evaluated on 30 datasets organized in 8 groups from multiple laboratories. Spines were detected with false negative rates less than 10% on most datasets (the average is 7.1%), and the average false positive rate was 11.8%. The software is available in open source form.

  17. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members; the geometry of the wire is critical to the performance of the overall concrete structure. For this research, a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. To achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution surface profile data while keeping algorithm running times within practical bounds for industrial application. Through a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key to identifying the features is a combination of downhill simplex optimization and geometrical feature templates: by performing downhill simplex through several procedural layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time…
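    As an illustration of the downhill simplex ingredient, the sketch below fits a circular feature template to 2D profile points with Nelder-Mead; the circle template and least-squares residual are illustrative assumptions, not the production algorithm.

```python
# Fit a circular feature template (cx, cy, r) to profile points using
# downhill simplex (Nelder-Mead); template and residual are assumptions.
import numpy as np
from scipy.optimize import minimize

def circle_residual(params, pts):
    cx, cy, r = params
    return np.sum((np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r) ** 2)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.c_[3.0 + 1.5 * np.cos(theta), -1.0 + 1.5 * np.sin(theta)]

fit = minimize(circle_residual, x0=[0.0, 0.0, 1.0], args=(pts,),
               method="Nelder-Mead")
print(fit.x)   # approximately [3.0, -1.0, 1.5]
```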

  18. SWIFT-scalable clustering for automated identification of rare cell populations in large, high-dimensional flow cytometry datasets, part 1: algorithm design.

    PubMed

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-05-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems.

  19. SWIFT—Scalable Clustering for Automated Identification of Rare Cell Populations in Large, High-Dimensional Flow Cytometry Datasets, Part 1: Algorithm Design

    PubMed Central

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-01-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems. PMID:24677621

  20. Quantification of accuracy of the automated nonlinear image matching and anatomical labeling (ANIMAL) nonlinear registration algorithm for 4D CT images of lung.

    PubMed

    Heath, E; Collins, D L; Keall, P J; Dong, L; Seuntjens, J

    2007-11-01

    The performance of the ANIMAL (Automated Nonlinear Image Matching and Anatomical Labeling) nonlinear registration algorithm for registration of thoracic 4D CT images was investigated. The algorithm was modified to minimize the incidence of deformation vector discontinuities that occur during the registration of lung images. Registrations were performed between the inhale and exhale phases for five patients. The registration accuracy was quantified by the cross-correlation of transformed and target images and by the distance to agreement (DTA) measured from anatomical landmarks and from triangulated surfaces constructed from manual contours. On average, the vector DTA between transformed and target landmarks was 1.6 mm. Comparing transformed and target 3D triangulated surfaces derived from planning contours, the average gross tumor volume (GTV) center-of-mass shift was 2.0 mm and the 3D DTA was 1.6 mm. An average DTA of 1.8 mm was obtained for all planning structures. All DTA metrics were comparable to the inter-observer uncertainties established for landmark identification and manual contouring.
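    The landmark-based DTA reported above reduces to a mean Euclidean distance between corresponding point sets; a minimal sketch, with hypothetical coordinates, follows.

```python
# Landmark distance-to-agreement (DTA): mean 3D distance (mm) between
# corresponding transformed and target landmarks. Coordinates are
# hypothetical examples.
import numpy as np

def mean_dta(transformed: np.ndarray, target: np.ndarray) -> float:
    """Both arrays have shape (n_landmarks, 3), in mm."""
    return float(np.linalg.norm(transformed - target, axis=1).mean())

a = np.array([[10.0, 5.0, 3.0], [22.0, 8.0, 1.0]])
b = np.array([[11.0, 5.5, 3.0], [21.0, 8.0, 2.0]])
print(mean_dta(a, b))   # ~1.27 mm
```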

  1. Datamining the NOAO NVO Portal: Automated Image Classification

    NASA Astrophysics Data System (ADS)

    Vaswani, Pooja; Miller, C. J.; Barg, I.; Smith, R. C.

    2006-12-01

    Image metadata describes the properties of an image and can be used for classification, e.g., galactic, extra-galactic, solar system, or standard star, among others. We are developing a data mining application to automate such a classification process based on supervised learning using decision trees, and we are applying this application to the NOAO NVO Portal (www.nvo.noao.edu). The core concepts of Quinlan's C4.5 decision tree induction algorithm are used to train, build a decision tree, and generate classification rules. These rules are then used to classify previously unseen image metadata. We utilize a collection of decision trees instead of a single classifier and average the classification probabilities. The concept of "bagging" is used to create the collection of classifiers. The classification algorithm also facilitates the addition of weights to the probability estimates of the classes when prior knowledge of the class distribution is known.
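    The bagging-and-averaging idea sketches directly in a few lines; here sklearn's entropy-criterion CART stands in for C4.5, and the metadata features and class names are synthetic placeholders.

```python
# Bagged decision trees with averaged class probabilities, in the spirit
# of the approach above; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))    # image-metadata features (placeholders)
y = rng.choice(["galactic", "extra-galactic", "solar system"], size=200)

bag = BaggingClassifier(DecisionTreeClassifier(criterion="entropy"),
                        n_estimators=25, random_state=0).fit(X, y)
print(bag.predict_proba(X[:2]))  # probabilities averaged over the trees
```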

  2. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation.

    PubMed

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-21

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV  = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements

  3. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    NASA Astrophysics Data System (ADS)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume…

  4. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    PubMed

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter, which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step toward improving automated methods for atherosclerotic plaque quantification, stent analysis, and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing lumen area quantification, geometrical features of the cross-sectional lumen, and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and testing were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean lumen area difference of 0.11 mm² compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features.
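    The classification stage reported as best-performing can be sketched minimally as follows; the feature names and data are assumptions, and the orthogonal-least-squares feature-selection stage is omitted.

```python
# Minimal sketch of the classification stage only: AdaBoost over
# geometric lumen features labels frames as bifurcation (1) or not (0).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 10))   # e.g. lumen area, eccentricity (assumed)
y = rng.integers(0, 2, size=500)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```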

  5. Multi-objective genetic algorithm for the automated planning of a wireless sensor network to monitor a critical facility

    NASA Astrophysics Data System (ADS)

    Jourdan, Damien B.; de Weck, Olivier L.

    2004-09-01

    This paper examines the optimal placement of nodes for a Wireless Sensor Network (WSN) designed to monitor a critical facility in a hostile region. The sensors are dropped from an aircraft, and they must be connected (directly or via hops) to a High Energy Communication Node (HECN), which serves as a relay from the ground to a satellite or a high-altitude aircraft. The sensors are assumed to have fixed communication and sensing ranges. The facility is modeled as circular and served by two roads. This simple model is used to benchmark the performance of the optimizer (a Multi-Objective Genetic Algorithm, or MOGA) in creating WSN designs that provide clear assessments of movements in and out of the facility, while minimizing both the likelihood of sensors being discovered and the number of sensors to be dropped. The algorithm is also tested on two other scenarios; in the first one the WSN must detect movements in and out of a circular area, and in the second one it must cover uniformly a square region. The MOGA is shown again to perform well on those scenarios, which shows its flexibility and possible application to more complex mission scenarios with multiple and diverse targets of observation.

  6. Practice of Clinical Supervision.

    ERIC Educational Resources Information Center

    Holland, Patricia E.

    1988-01-01

    Clinical supervision remained grounded in empirical inquiry as late as Morris Cogan's writings on the subject in 1973. With the acknowledgment of Thomas Kuhn's (1962) paradigm shift, educational theory and practice developed interpretive methodologies. An interpretive reflection on Cogan's rationale offers insights into the current, matured…

  7. Supervision as Cultural Inquiry.

    ERIC Educational Resources Information Center

    Flinders, David J.

    1991-01-01

    Describes a framework for "culturally responsive supervision." An understanding of analogic or iconic metaphors reveals the power of language to shape what are regarded as matters of fact. Kinesics, proxemics, and prosody bring into focus channels of nonverbal communication. The concept of "framing" calls attention to the metamessages of verbal…

  8. Revisiting Supervised Agricultural Experience.

    ERIC Educational Resources Information Center

    Camp, William G.; Clarke, Ariane; Fallon, Maureen

    2000-01-01

    A Delphi panel of 40 agricultural educators unanimously agreed that supervised agricultural experience should remain an integral component of the curriculum; a name change is not currently warranted. Categories recommended were agribusiness entrepreneurship, placement, production, research, directed school lab, communications, exploration, and…

  9. Supervising Graduate Assistants

    ERIC Educational Resources Information Center

    White, Jessica; Nonnamaker, John

    2011-01-01

    Discussions of personnel management in student affairs literature and at national conferences often focus on supervising new or midlevel professionals and the myriad challenges and possibilities these relationships entail (Carpenter, 2001; Winston and Creamer, 1997). Graduate students as employees and the often-complicated and ill-structured…

  10. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms

    SciTech Connect

    Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark

    2012-06-15

    Purpose: The purpose of this work is to develop a framework for the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions), including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for the classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy), favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than for manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an…

  11. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    PubMed

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
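    The K-means side of the comparison can be sketched compactly: cluster voxel concentration-time curves, then pick the cluster whose mean curve is tall and early. The selection score below is a crude illustrative stand-in for published AIF criteria, and the curves are synthetic.

```python
# Minimal sketch of K-means-based AIF detection on synthetic curves.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.arange(40.0)
peaks = rng.uniform(8, 20, size=(500, 1))          # per-voxel bolus arrival
curves = np.exp(-0.5 * ((t - peaks) / 3.0) ** 2)   # concentration-time curves

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(curves)
means = np.array([curves[km.labels_ == k].mean(axis=0) for k in range(5)])
score = means.max(axis=1) / (means.argmax(axis=1) + 1)  # tall and early
aif = means[int(np.argmax(score))]                 # candidate AIF curve
```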

  12. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise, and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the sources of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets that include each of the target classes. In the case of seismic signals, the attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The random forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g., SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied…
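    In code, the multi-class random forest stage reduces to a few lines; the per-event attribute names and synthetic data below are assumptions standing in for the catalog features described above.

```python
# Hedged sketch: a random forest over per-event waveform attributes for
# multi-class seismic source classification. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 12))  # e.g. spectral centroid, duration, kurtosis
y = rng.choice(["volcano-tectonic", "rockfall", "regional quake"], size=1000)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(rf.predict(X[:3]))
```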

  13. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus is known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, its use requires accurate, robust, and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening is proposed. First, atlas-based segmentation is applied to define an initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds is further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening is applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index = 0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). As for segmentation accuracy, measured in terms of false positive and false negative ratios, the proposed method (precision = 0.76±0.04, recall = 0.86±0.05) outperformed the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust, and reliable segmentation of the hippocampus.

  14. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of the clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection, and their progression, using CT images. Accurate estimation of the partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus-induced disease before inoculation and after disease progression. The automated estimates are in close agreement with manual segmentation.
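
    Hard C-means is the crisp (non-fuzzy) variant of C-means clustering, equivalent to k-means. The sketch below clusters CT voxel intensities into tissue classes as one conceivable step of such a pipeline; the Hounsfield-unit ranges, the class count, and the "densest cluster = lesion" heuristic are assumptions for illustration.

```python
# Hard C-means (k-means) clustering of CT intensities into tissue classes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic stand-in for lung CT voxels (Hounsfield units):
# air (~-900), aerated lung (~-700), lesion/soft tissue (~40).
hu = np.concatenate([rng.normal(-900, 30, 4000),
                     rng.normal(-700, 50, 4000),
                     rng.normal(40, 60, 1000)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(hu.reshape(-1, 1))
lesion_cluster = int(np.argmax(km.cluster_centers_.ravel()))  # densest class
print("lesion voxels:", int(np.sum(km.labels_ == lesion_cluster)))
```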

  15. On Training Targets for Supervised Speech Separation

    PubMed Central

    Wang, Yuxuan; Narayanan, Arun; Wang, DeLiang

    2014-01-01

    Formulation of speech separation as a supervised learning problem has shown considerable promise. In its simplest form, a supervised learning algorithm, typically a deep neural network, is trained to learn a mapping from noisy features to a time-frequency representation of the target of interest. Traditionally, the ideal binary mask (IBM) is used as the target because of its simplicity and large speech intelligibility gains. The supervised learning framework, however, is not restricted to the use of binary targets. In this study, we evaluate and compare separation results obtained using different training targets, including the IBM, the target binary mask, the ideal ratio mask (IRM), the short-time Fourier transform spectral magnitude and its corresponding mask (FFT-MASK), and the Gammatone frequency power spectrum. Our results in various test conditions reveal that the two ratio mask targets, the IRM and the FFT-MASK, outperform the other targets in terms of objective intelligibility and quality metrics. In addition, we find that masking-based targets are, in general, significantly better than spectral-envelope-based targets. We also present comparisons with recent methods in non-negative matrix factorization and speech enhancement, which show clear performance advantages for supervised speech separation. PMID:25599083
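
    For readers unfamiliar with ratio-mask targets, the following sketch computes an IRM from known speech and noise signals under an additive-mixing assumption. The signals are synthetic stand-ins, and the exponent beta = 0.5 is one common convention rather than the paper's exact definition.

```python
# Ideal ratio mask (IRM) over time-frequency units, assuming additive noise.
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)                 # stand-in "speech"
noise = np.random.default_rng(0).normal(0, 0.5, fs)  # stand-in noise

_, _, S = stft(speech, fs=fs, nperseg=512)
_, _, N = stft(noise, fs=fs, nperseg=512)

# Soft mask in [0, 1]: energy ratio raised to beta = 0.5.
irm = (np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)) ** 0.5
print(irm.shape, float(irm.min()), float(irm.max()))
```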

  16. Extracting PICO Sentences from Clinical Trial Reports using Supervised Distant Supervision.

    PubMed

    Wallace, Byron C; Kuiper, Joël; Sharma, Aakash; Zhu, Mingxi Brian; Marshall, Iain J

    2016-01-01

    Systematic reviews underpin Evidence-Based Medicine (EBM) by addressing precise clinical questions via comprehensive synthesis of all relevant published evidence. Authors of systematic reviews typically define a Population/Problem, Intervention, Comparator, and Outcome (the PICO criteria) of interest, and then retrieve, appraise and synthesize results from all reports of clinical trials that meet these criteria. Identifying PICO elements in the full texts of trial reports is thus a critical yet time-consuming step in the systematic review process. We seek to expedite evidence synthesis by developing machine learning models to automatically extract sentences relevant to PICO elements from articles. Collecting a large corpus of training data for this task would be prohibitively expensive. Therefore, we derive distant supervision (DS) with which to train models using previously conducted reviews. DS entails heuristically deriving 'soft' labels from an available structured resource. However, we have access only to unstructured, free-text summaries of PICO elements for corresponding articles; we must derive from these the desired sentence-level annotations. To this end, we propose a novel method, supervised distant supervision (SDS), that uses a small amount of direct supervision to better exploit a large corpus of distantly labeled instances by learning to pseudo-annotate articles using the available DS. We show that this approach tends to outperform existing methods with respect to automated PICO extraction.

  17. A randomised controlled trial of an automated oxygen delivery algorithm for preterm neonates receiving supplemental oxygen without mechanical ventilation

    PubMed Central

    Zapata, James; Gómez, John Jairo; Araque Campo, Robinson; Matiz Rubio, Alejandro; Sola, Augusto

    2014-01-01

    Aim Providing consistent levels of oxygen saturation (SpO2) for infants in neonatal intensive care units is not easy. This study explored how effectively the Auto-Mixer® algorithm automatically adjusted fraction of inspired oxygen (FiO2) levels to maintain SpO2 within an intended range in extremely low birth weight infants receiving supplemental oxygen without mechanical ventilation. Methods Twenty extremely low birth weight infants were randomly assigned to the Auto-Mixer® group or the manual intervention group and studied for 12 h. The SpO2 target was 85–93%, and the outcomes were the percentage of time SpO2 was within target, SpO2 variability, SpO2 >95%, oxygen received and manual interventions. Results The percentage of time within intended SpO2 was 58 ± 4% in the Auto-Mixer® group and 33.7 ± 4.7% in the manual group, SpO2 >95% was 26.5% vs 54.8%, average SpO2 and FiO2 were 89.8% vs 92.2% and 37% vs 44.1%, and manual interventions were 0 vs 80 (p < 0.05). Brief periods of SpO2 < 85% occurred more frequently in the Auto-Mixer® group. Conclusion The Auto-Mixer® effectively increased the percentage of time that SpO2 was within the intended target range and decreased the time with high SpO2 in spontaneously breathing extremely low birth weight infants receiving supplemental oxygen. PMID:24813808
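
    The abstract does not describe the Auto-Mixer® control law itself, so the sketch below is only a deliberately simple, hypothetical proportional controller illustrating what closed-loop FiO2 titration toward an SpO2 target band looks like in code; the gain, limits, and update rule are invented.

```python
# Hypothetical closed-loop FiO2 titration toward an SpO2 target band.
# NOT the Auto-Mixer(R) algorithm; gains and limits are illustrative only.
def adjust_fio2(fio2, spo2, target_low=85.0, target_high=93.0,
                gain=0.01, fio2_min=0.21, fio2_max=1.0):
    """Return an updated FiO2 given the latest SpO2 reading (%)."""
    if spo2 < target_low:            # hypoxemic: raise oxygen fraction
        fio2 += gain * (target_low - spo2)
    elif spo2 > target_high:         # hyperoxemic: lower oxygen fraction
        fio2 -= gain * (spo2 - target_high)
    return min(max(fio2, fio2_min), fio2_max)

fio2 = 0.30
for spo2 in [82, 84, 88, 96, 94, 90]:   # simulated readings
    fio2 = adjust_fio2(fio2, spo2)
    print(f"SpO2={spo2}%  ->  FiO2={fio2:.2f}")
```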

  18. A new algorithm for off-line automated emboli detection based on the pseudo-Wigner power distribution and the dual-gate TCD technique.

    PubMed

    Mess, W H; Titulaer, B M; Ackerstaff, R G

    2000-03-01

    Research on microembolic signals (MES) using the dual-gate technique has shown promising results when the time difference (Δt) of a MES between two serially placed sample volumes (SVs) is measured manually. On the other hand, the computerized discrimination of MES and artefacts has been reported not to be superior to algorithms based on a single SV. Therefore, a dataset containing MES as well as four types of artefacts was made to test a preliminary version of a new algorithm for automated emboli detection. We monitored 20 patients during carotid endarterectomy (n = 17) and heart surgery (n = 3). Two transcranial Doppler (TCD) signals with a partial overlap of the SVs were recorded online and analysed off-line with an algorithm based on three consecutive steps: 1. Is there an intensity increase in both channels (64-point FFT; 50% overlap)? 2. What is the expected time difference (Δt), with the velocity measured in channel 1 as the calculation basis? 3. What is the 'exact' Δt (pseudo-Wigner power function)? Two human experts decided whether a signal was a MES or belonged to one of the four artefact groups. Of a total of 97 MES, 28% (n = 27) could not be detected in the distal channel. Thus, 72% (n = 70) of the MES were present in both channels and could be analysed based on the above-mentioned criteria. Of these 70 MES, 87% (n = 61) were correctly identified off-line. We assessed artefact rejection for four different types of artefacts: changes of TCD settings, probe movement, low-flow artefacts and electrocautery. The reliability of artefact rejection was 98% for setting changes (n = 382), 96% for probe movement (n = 477) and 98% for low-flow artefacts (n = 91), but only 68% for electrocautery (n = 264). These preliminary results are promising, but need careful interpretation: 28% of the MES were not detectable in the distal SV, probably due to a poor signal-to-noise ratio (SNR) and anatomical restrictions. Electrocautery signals were insufficiently
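
    Step 1 of the three-step chain is simple enough to sketch: flag FFT frames whose power rises well above a running background in both gates simultaneously. The window length matches the 64-point, 50%-overlap FFT mentioned above, but the threshold rule and synthetic signals are assumptions; steps 2 and 3 (the expected and pseudo-Wigner Δt estimates) are not reproduced.

```python
# Sketch of step 1: candidate MES frames show an intensity increase in
# BOTH sample volumes (64-point FFT, 50% overlap). Threshold is assumed.
import numpy as np

def frame_power(x, nfft=64, hop=32):
    frames = [x[i:i + nfft] for i in range(0, len(x) - nfft, hop)]
    return np.array([np.abs(np.fft.rfft(f)).sum() for f in frames])

def candidate_frames(ch1, ch2, factor=4.0):
    p1, p2 = frame_power(ch1), frame_power(ch2)
    return np.flatnonzero((p1 > factor * np.median(p1)) &
                          (p2 > factor * np.median(p2)))

rng = np.random.default_rng(0)
ch1, ch2 = rng.normal(0, 1, 4096), rng.normal(0, 1, 4096)
ch1[2000:2064] += 20; ch2[2010:2074] += 20   # embolus passing both gates
print(candidate_frames(ch1, ch2))            # frames near index 62
```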

  19. Supervised autonomous robotic soft tissue surgery.

    PubMed

    Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W

    2016-05-04

    The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery, removing the surgeon's hands, promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including the lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and the lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis (including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses) between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.

  20. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification has previously been proposed to improve prediction performance for single-label problems. Following this line of research, in this paper we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning, and we present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets, comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.

  1. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, the CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at an ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in images reconstructed at ASiR 40% were the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
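
    The NPS computation described (a 3D FFT of the phantom's uniformity section) can be sketched compactly. Normalization conventions vary between groups; the zero-mean subtraction and voxel-size scaling below are one common choice, applied here to synthetic noise rather than phantom data.

```python
# Noise power spectrum of a uniformity volume via the 3D FFT (illustrative).
import numpy as np

def nps_3d(volume, voxel_size=(0.5, 0.5, 0.5)):
    noise = volume - volume.mean()              # remove the DC component
    nps = np.abs(np.fft.fftn(noise)) ** 2
    nps *= np.prod(voxel_size) / noise.size     # one common normalization
    return np.fft.fftshift(nps)                 # zero frequency at center

uniform = np.random.default_rng(0).normal(0, 10, (64, 64, 64))  # HU noise
print(nps_3d(uniform).shape)
```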

  2. Validity of an automated algorithm to identify waking and in-bed wear time in hip-worn accelerometer data collected with a 24 h wear protocol in young adults.

    PubMed

    McVeigh, Joanne A; Winkler, Elisabeth A H; Healy, Genevieve N; Slater, James; Eastwood, Peter R; Straker, Leon M

    2016-09-21

    Researchers are increasingly using 24 h accelerometer wear protocols. No automated method has been published that accurately distinguishes 'waking' wear time from other data ('in-bed', non-wear, invalid days) in young adults. This study examined the validity of an automated algorithm developed to achieve this for hip-worn Actigraph GT3X+ 60 s epoch data. We compared the algorithm against a referent method ('into-bed' and 'out-of-bed' times visually identified by two independent raters) and benchmarked it against two published algorithms. All methods used the same non-wear rules. The development sample (n = 11) and validation sample (n = 95) were Australian young adults from the Raine pregnancy cohort (54% female), all aged approximately 22 years. The agreement with Rater 1 in each minute's classification (yes/no) of waking wear time was examined as kappa (κ), limited to valid days (≥10 h waking wear time per day) according to the algorithm and Rater 1. Bland-Altman methods assessed agreement in daily totals of waking wear and in-bed wear time. Excellent agreement (κ > 0.75) was obtained between the raters for 80% of participants (median κ = 0.94). The algorithm showed excellent agreement with Rater 1 (κ > 0.75) for 89% of participants and poor agreement (κ < 0.40) for 1%. In this sample, the algorithm (median κ = 0.86) performed better than algorithms validated in children (median κ = 0.77) and adolescents (median κ = 0.66). The mean difference (95% limits of agreement) between Rater 1 and the algorithm was 7 (-220, 234) min d⁻¹ for waking wear time on valid days and -41 (-309, 228) min d⁻¹ for in-bed wear time. In this population, the automated algorithm's validity for identifying waking wear time was mostly good, not worse than inter-rater agreement, and better than the evaluated published alternatives. However, the algorithm requires

  3. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    PubMed Central

    Gouret, Philippe; Vitiello, Vérane; Balandraud, Nathalie; Gilles, André; Pontarotti, Pierre; Danchin, Etienne GJ

    2005-01-01

    Background Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' positions and structures and inferring their functions (as well as other features of genomes). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require an important contribution from biologists for supervising and controlling the results at various steps. Results Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows it, for example, to make key decisions, check intermediate results or refine the dataset). The quality of the results produced by FIGENIX is comparable to that obtained by expert biologists, with a drastic gain in terms of time costs and avoidance of errors due to human manipulation of data. Conclusion The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could easily be adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, the annotation of regulatory elements and other genomic features of interest. PMID:16083500

  4. MAGIC: an automated N-linked glycoprotein identification tool using a Y1-ion pattern matching algorithm and in silico MS² approach.

    PubMed

    Lynn, Ke-Shiuan; Chen, Chen-Chun; Lih, T Mamie; Cheng, Cheng-Wei; Su, Wan-Chih; Chang, Chun-Hao; Cheng, Chia-Ying; Hsu, Wen-Lian; Chen, Yu-Ju; Sung, Ting-Yi

    2015-02-17

    Glycosylation is a highly complex modification influencing the functions and activities of proteins. Interpretation of intact glycopeptide spectra is crucial but challenging. In this paper, we present a mass spectrometry-based automated glycopeptide identification platform (MAGIC) to identify peptide sequences and glycan compositions directly from intact N-linked glycopeptide collision-induced-dissociation spectra. The identification of the Y1 (peptide + GlcNAc) ion is critical for the correct analysis of unknown glycoproteins, especially without prior knowledge of the proteins and glycans present in the sample. To ensure accurate Y1-ion assignment, we propose a novel algorithm called Trident that detects a triplet pattern corresponding to [Y0, Y1, Y2] or [Y0-NH3, Y0, Y1] from the fragmentation of the common trimannosyl core of N-linked glycopeptides. To facilitate the subsequent peptide sequence identification by common database search engines, MAGIC generates in silico spectra by overwriting the original precursor with the naked peptide m/z and removing all of the glycan-related ions. Finally, MAGIC computes the glycan compositions and ranks them. For the model glycoprotein horseradish peroxidase (HRP) and a 5-glycoprotein mixture, a 2- to 31-fold increase in the relative intensities of the peptide fragments was achieved, which led to the identification of 7 tryptic glycopeptides from HRP and 16 glycopeptides from the mixture via Mascot. In the HeLa cell proteome data set, MAGIC processed over a thousand MS² spectra in 3 min on a PC and reported 36 glycopeptides from 26 glycoproteins. Finally, a remarkable false discovery rate of 0 was achieved on the N-glycosylation-free Escherichia coli data set. MAGIC is available at http://ms.iis.sinica.edu.tw/COmics/Software_MAGIC.html .
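
    The triplet idea can be illustrated with a toy peak-list search: the Y0, Y1 and Y2 ions of a singly charged glycopeptide differ by one GlcNAc residue mass each. The tolerance, the synthetic peaks, and the restriction to the [Y0, Y1, Y2] pattern (ignoring [Y0-NH3, Y0, Y1], charge states, and intensity scoring) are simplifying assumptions, so this is a sketch of the idea rather than the Trident algorithm itself.

```python
# Toy search for [Y0, Y1, Y2] triplets spaced by one GlcNAc residue mass
# in a deisotoped, singly charged peak list (illustrative assumptions).
GLCNAC = 203.0794  # GlcNAc residue mass in Da

def find_y_triplets(peaks, tol=0.02):
    peaks = sorted(peaks)
    hits = []
    for y0 in peaks:
        y1 = [p for p in peaks if abs(p - (y0 + GLCNAC)) < tol]
        y2 = [p for p in peaks if abs(p - (y0 + 2 * GLCNAC)) < tol]
        if y1 and y2:
            hits.append((y0, y1[0], y2[0]))
    return hits

spectrum = [650.30, 804.41, 1007.49, 1210.57, 1372.62]  # synthetic peaks
print(find_y_triplets(spectrum))  # -> [(804.41, 1007.49, 1210.57)]
```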

  5. Abdominal adipose tissue quantification on water-suppressed and non-water-suppressed MRI at 3T using semi-automated FCM clustering algorithm

    NASA Astrophysics Data System (ADS)

    Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.

    2014-03-01

    Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water-suppressed (WS) MRI is superior to that of non-water-suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissue. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which were analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty-tissue pixels from fatty-tissue pixels, compared to NWS images. Paired t-tests of WS versus NWS showed a statistically significant lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006) and IMAT (305 cc less, p<0.001), but not SAT (14.1 cc more, NS). WS measurements of TAT also resulted in lower fat volumes (436.1 cc less, p=0.002). There is strong correlation between the WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
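
    Fuzzy c-means assigns each intensity a soft membership in every class rather than a hard label. A minimal one-dimensional NumPy implementation is sketched below; the fuzziness exponent m = 2, the two-class setup, and the synthetic intensities are assumptions, not details from the study.

```python
# Minimal fuzzy c-means (FCM) on 1-D voxel intensities (illustrative).
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)      # membership-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(100, 10, 500),   # lean tissue
                              rng.normal(400, 30, 200)])  # adipose tissue
centers, u = fcm(intensities)
print(np.sort(centers))   # cluster centers near 100 and 400
```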

  6. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities for segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input, with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries, and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  7. Supervision of Supervised Agricultural Experience Programs: A Synthesis of Research.

    ERIC Educational Resources Information Center

    Dyer, James E.; Williams, David L.

    1997-01-01

    A review of literature from 1964 to 1993 found that supervised agricultural experience (SAE) teachers, students, parents, and employers value the teachers' supervisory role. Implementation practices vary widely and there are no cumulative data to guide policies and standards for SAE supervision. (SK)

  8. A Collaboratively Supervised Teaching Internship: Implications for Future Supervision.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes the 5-year Austin Teacher Program (ATP) at Austin College with emphasis on the collaboratively supervised internship in the graduate year. Some results of a comprehensive survey of over 400 ATP graduates are discussed, as well as issues and needs in the supervision of interns, and implications for the future in the supervision…

  9. Exploring Clinical Supervision as Instrument for Effective Teacher Supervision

    ERIC Educational Resources Information Center

    Ibara, E. C.

    2013-01-01

    This paper examines clinical supervision approaches that have the potential to promote and implement effective teacher supervision in Nigeria. The various approaches have been analysed based on the conceptual framework of instructional supervisory behavior. The findings suggest that a clear distinction can be made between the prescriptive and…

  10. Supervision Learning as Conceptual Threshold Crossing: When Supervision Gets "Medieval"

    ERIC Educational Resources Information Center

    Carter, Susan

    2016-01-01

    This article presumes that supervision is a category of teaching, and that we all "learn" how to teach better. So it enquires into what novice supervisors need to learn. An anonymised digital questionnaire sought data from supervisors [n = 226] on their experiences of supervision to find out what was difficult, and supervisor interviews…

  11. Design of Supervision Systems: Theory and Practice

    NASA Astrophysics Data System (ADS)

    Bouamama, Belkacem Ould

    2008-06-01

    The term "supervision" means a set of tools and methods used to operate an industrial process in normal situation as well as in the presence of failures or undesired disturbances. The activities concerned with the supervision are the Fault Detection and Isolation (FDI) in the diagnosis level, and the Fault Tolerant Control (FTC) through necessary reconfiguration, whenever possible, in the fault accommodation level. The final goal of a supervision platform is to provide the operator a set of tools that helps to safely run the process and to take appropriate decision in the presence of faults. Different approaches to the design of such decision making tools have been developed in the past twenty years, depending on the kind of knowledge (structural, statistical, fuzzy, expert rules, functional, behavioural…) used to describe the plant operation. How to elaborate models for FDI design, how to develop the FDI algorithm, how to avoid false alarms, how to improve the diagnosability of the faults for alarm management design, how to continue to control the process in failure mode, what are the limits of each method,…?. Such are the main purposes concerned by the presented plenary session from an industrial and theoretical point of view.

  12. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  13. Dynamic hierarchical algorithm for accelerated microfossil identification

    NASA Astrophysics Data System (ADS)

    Wong, Cindy M.; Joseph, Dileepan

    2015-02-01

    Marine microfossils provide a useful record of the Earth's resources and prehistory via biostratigraphy. To study hydrocarbon reservoirs and prehistoric climate, geoscientists visually identify the species of microfossils found in core samples. Because microfossil identification is labour intensive, automation has been investigated since the 1980s. With the initial rule-based systems, users still had to examine each specimen under a microscope. While artificial neural network systems showed more promise for reducing expert labour, they also did not displace manual identification, for a variety of reasons that we aim to overcome. In our human-based computation approach, the most difficult step, namely taxon identification, is outsourced via a frontend website to human volunteers. A backend algorithm, called dynamic hierarchical identification, uses unsupervised, supervised, and dynamic learning to accelerate microfossil identification. Unsupervised learning clusters specimens so that volunteers need not identify every specimen during supervised learning. Dynamic learning means that interim computation outputs prioritize subsequent human inputs. Using a dataset of microfossils identified by an expert, we evaluated correct and incorrect genus and species rates versus simulated time, where each specimen identification defines a moment. The proposed algorithm accelerated microfossil identification effectively, especially compared to benchmark results obtained using a k-nearest-neighbour method.

  14. Clinical supervision for nurse lecturers.

    PubMed

    Lewis, D

    This article builds on a previous one which discussed the use of de Bono's thinking tool, 'six thinking hats' in the clinical, managerial, educational and research areas of nursing (Lewis 1995). This article explores clinical supervision and describes how the six thinking hats may be used as a reflective tool in the supervision of nurse lecturers who teach counselling skills.

  15. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose To evaluate the effect of automatic tube potential selection and automatic exposure control, combined with female breast displacement, during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating and an automated tube potential and tube current selection algorithm (APS-AEC), with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP), median effective dose (ED), and size-specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results A total of 726 exams were included; 343 (47%) were females; mean BMI was similar by gender (28.6±6.9 kg/m² females vs. 29.2±6.3 kg/m² males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of -1.3 mSv, CTDIvol -4.1 mGy, and SSDE -6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, female exposure was lower by a median ED of -0.7 mSv, CTDIvol -2.3 mGy, and SSDE -3.15 mGy, respectively (all P<0.01). Conclusions We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of APS-AEC and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  16. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male phantom was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at an ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, an ASiR of 40% produced images with noise texture similar to the baseline protocol. However, the CNR at an ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  17. An Automated Treatment Plan Quality Control Tool for Intensity-Modulated Radiation Therapy Using a Voxel-Weighting Factor-Based Re-Optimization Algorithm.

    PubMed

    Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura

    2016-01-01

    Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting-factor-based plan re-optimization mechanism has been proven able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house-developed voxel-weighting-factor-based re-optimization algorithm, which was enhanced into a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for the planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. Moreover, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be

  18. Semi-supervised multi-label collective classification ensemble for functional genomics

    PubMed Central

    2014-01-01

    Background With the rapid accumulation of proteomic and genomic datasets in terms of genome-scale features and interaction networks through high-throughput experimental techniques, the process of manually predicting the functional properties of proteins has become increasingly cumbersome, and computational methods to automate this annotation task are urgently needed. Most approaches to predicting the functional properties of proteins require either identifying a reliable set of labeled proteins with attribute features similar to those of unannotated proteins, or learning from a fully labeled protein interaction network with a large amount of labeled data. However, acquiring such labels can be very difficult in practice, especially for multi-label protein function prediction problems. Learning with only a few labeled examples can lead to poor performance, as only limited supervision knowledge can be obtained from similar proteins or from the connections between them. To effectively annotate proteins even in the paucity of labeled data, it is important to take advantage of all data sources that are available in this problem setting, including interaction networks, attribute feature information, correlations of functional labels, and unlabeled data. Results In this paper, we show that the underlying nature of predicting the functional properties of proteins using various sources of relational data is a typical collective classification (CC) problem in machine learning. The protein function prediction task with limited annotation is then cast into a semi-supervised multi-label collective classification (SMCC) framework. As such, we propose a novel generative-model-based SMCC algorithm, called GM-SMCC, to effectively compute the label probability distributions of unannotated protein instances and predict their functional properties. To further boost the predicting performance, we extend the method in an ensemble manner, called EGM-SMCC, by utilizing multiple heterogeneous networks with

  19. A Semi-Supervised Learning Approach to Enhance Health Care Community–Based Question Answering: A Case Study in Alcoholism

    PubMed Central

    Klabjan, Diego; Jonnalagadda, Siddhartha Reddy

    2016-01-01

    Background Community-based question answering (CQA) sites play an important role in addressing health information needs. However, a significant number of posted questions remain unanswered. Automatically answering the posted questions can provide a useful source of information for Web-based health communities. Objective In this study, we developed an algorithm to automatically answer health-related questions based on past questions and answers (QA). We also aimed to understand what information embedded within Web-based health content makes for good features in identifying valid answers. Methods Our proposed algorithm uses information retrieval techniques to identify candidate answers from resolved QA. To rank these candidates, we implemented a semi-supervised learning algorithm that extracts the best answer to a question. We assessed this approach on a curated corpus from Yahoo! Answers and compared it against a rule-based string similarity baseline. Results On our dataset, the semi-supervised learning algorithm has an accuracy of 86.2%. Unified Medical Language System-based (health-related) features used in the model enhance the algorithm's performance by approximately 8%. A reasonably high rate of accuracy is obtained given that the data are considerably noisy. Important features distinguishing a valid answer from an invalid answer include text length, the number of stop words contained in a test question, the distance between the test question and other questions in the corpus, and the number of overlapping health-related terms between questions. Conclusions Overall, our automated QA system based on historical QA pairs is shown to be effective according to the dataset in this case study. It is developed for general use in the health care domain, and can also be applied to other CQA sites. PMID:27485666
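
    The retrieval stage, ranking previously resolved questions by similarity to a new question so that their answers become candidates, is easy to sketch with TF-IDF and cosine similarity. The toy questions are invented, and the semi-supervised ranker and UMLS features described above are not reproduced.

```python
# Candidate-answer retrieval by TF-IDF similarity over resolved questions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resolved_questions = [
    "How can I help a family member who drinks too much?",
    "What are the withdrawal symptoms of alcohol?",
    "Is moderate drinking safe during pregnancy?",
]
new_question = "What symptoms appear when someone stops drinking alcohol?"

vec = TfidfVectorizer(stop_words="english")
q_matrix = vec.fit_transform(resolved_questions)
sims = cosine_similarity(vec.transform([new_question]), q_matrix).ravel()
best = int(sims.argmax())
print(resolved_questions[best], round(float(sims[best]), 3))
```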

  20. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSDs should be ensembled. Ensemble-averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine the power-law scaling of experimentally measured single-particle MSDs. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitude, scaling exponent values, or combinations thereof. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program
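
    The distributed program is MATLAB; purely as a conceptual illustration (not the published code), the Python sketch below estimates the power-law exponent α of a single trajectory's MSD, MSD(τ) ≈ A·τ^α, from a linear fit in log-log space. The lag range and the pure-diffusion test trajectory are assumptions.

```python
# Estimate the MSD power-law exponent alpha of one 2-D trajectory.
import numpy as np

def msd(track):
    """Time-averaged MSD of a (T, 2) trajectory for lags 1..T//4."""
    lags = np.arange(1, len(track) // 4)
    return lags, np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1))
                           for l in lags])

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 1, (1000, 2)), axis=0)  # pure diffusion
lags, m = msd(track)
alpha, log_a = np.polyfit(np.log(lags), np.log(m), 1)   # log-log slope
print(f"alpha ~ {alpha:.2f}")  # ~1 diffusive; ~2 would indicate driven motion
```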

  1. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning

    PubMed Central

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-01-01

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities, on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remain challenging. A method of image-statistical-modeling-based OPQI for GP quality grading and monitoring, using a Weibull distribution (WD) model with a semi-supervised learning classifier, is presented. WD-model parameters (WD-MPs) of the spatial structures of GP images, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers of complementary nature in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines. PMID:27367703

  2. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  3. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
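
    A toy dispatch rule makes the contrast concrete: greedily place jobs on a single resource in earliest-deadline order, keeping only those that fit inside their time windows. The job format and the single-machine setting are assumptions; the paper's algorithms operate on more general domains.

```python
# Toy dispatch heuristic for window-constrained packing on one resource.
def dispatch(jobs):
    """jobs: list of (name, duration, window_start, window_end)."""
    t, schedule = 0, []
    for name, dur, ws, we in sorted(jobs, key=lambda j: j[3]):  # earliest deadline
        start = max(t, ws)
        if start + dur <= we:              # job fits inside its window
            schedule.append((name, start, start + dur))
            t = start + dur
    return schedule

jobs = [("A", 2, 0, 5), ("B", 3, 1, 6), ("C", 1, 0, 3)]
print(dispatch(jobs))  # -> [('C', 0, 1), ('A', 1, 3), ('B', 3, 6)]
```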

  4. Learning for Microblogs with Distant Supervision: Political Forecasting with Twitter

    DTIC Science & Technology

    2012-04-01

    is expensive, recent work on Twitter uses emoticons (i.e., ASCII smiley faces such as :-( and :-)) as noisy labels in tweets for distant supervision...supervision has grown in popularity. These algorithms use emoticons to serve as semantic indicators for sentiment. For instance, a sad face (e.g...serves as a noisy label for a negative mood. Read (2005) was the first to suggest emoticons for UseNet data, followed by Go et al. (Go et al., 2009) on

  5. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme, from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  6. Supervised Dictionary Learning

    DTIC Science & Technology

    2008-11-01

    recently led to state-of-the-art results for numerous low-level image processing tasks such as denoising [2], showing that sparse models are well... denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. IP, 54(12), 2006. [3] K. Huang and S. Aviyente. Sparse...2006. [19] M. Aharon, M. Elad, and A. M. Bruckstein. The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representations

  7. Educational Supervision Appropriate for Psychiatry Trainee's Needs

    ERIC Educational Resources Information Center

    Rele, Kiran; Tarrant, C. Jane

    2010-01-01

    Objective: The authors studied the regularity and content of supervision sessions in one of the U.K. postgraduate psychiatric training schemes (Mid-Trent). Methods: A questionnaire sent to psychiatry trainees assessed the timing and duration of supervision, content and protection of supervision time, and overall quality of supervision. The authors…

  8. Supervised learning of probability distributions by neural networks

    NASA Technical Reports Server (NTRS)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
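
    The modification described, interpreting the output unit as a probability and following the gradient of the log-likelihood instead of the squared error, can be shown on a single sigmoid neuron. This is a generic, modern restatement of the idea, not the 1988 implementation; the data and learning rate are invented.

```python
# Gradient ascent on the log-likelihood of a sigmoid output neuron.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # binary targets

w, lr = np.zeros(3), 0.1
for _ in range(500):
    p = sigmoid(X @ w)                # output interpreted as a probability
    # Log-likelihood gradient: X^T (y - p) / n. Note there is no extra
    # sigmoid-derivative factor p*(1-p), unlike the squared-error gradient.
    w += lr * X.T @ (y - p) / len(y)

print(np.round(w, 2))  # recovers the sign pattern (+, -, +)
```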

  9. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions are partitioned between the power subsystem, the central spacecraft computer, and ground flight-support personnel. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, the need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential to meeting the 1987 technology readiness date for the space station.

  10. Automating the Modeling of the SEE Cross Section's Angular Dependence

    NASA Technical Reports Server (NTRS)

    Patterson, J. D.; Edmonds, L. D.

    2003-01-01

    An algorithm that automates the application of the alpha law in any SEE analysis is presented. This automation is essential for the widespread acceptance of the sophisticated cross section angular dependence model.

  11. Improved semi-supervised online boosting for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Yicui; Qi, Lin; Tan, Shukun

    2016-10-01

    An advantage of online semi-supervised boosting methods, which treat object tracking as a classification problem, is that a binary classifier can be trained from labeled and unlabeled examples, with appropriate object features selected in response to real-time changes in the object. However, the online semi-supervised boosting method faces one key problem: traditional self-training, which uses classification results to update the classifier itself, often leads to drifting or tracking failure because of the error accumulated during each update of the tracker. To overcome these disadvantages of semi-supervised online boosting for object tracking, the contribution of this paper is an improved online semi-supervised boosting method in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, the classification results are analyzed by the P-N constraints, which verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In experiments, we demonstrate real-time tracking on several challenging test sequences, where our tracker outperforms other related online tracking methods and achieves promising tracking performance.
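
    The P-N correction step can be caricatured in a few lines: the P-constraint relabels detections near the validated trajectory as positive (fixing false negatives), while the N-constraint relabels confident detections far from it as negative (fixing the false positives that cause drift). The box representation, distance rule, and radius below are illustrative assumptions, not the paper's exact constraints.

```python
# Toy P-N constraint correction of a tracker's self-training labels.
import numpy as np

def apply_pn_constraints(centers, scores, trajectory_center, radius=30.0):
    """Correct classifier labels for candidate detections (x, y centers)."""
    labels = scores > 0.5                          # classifier's own labels
    dist = np.linalg.norm(centers - trajectory_center, axis=1)
    labels[dist < radius] = True                   # P: fix false negatives
    labels[(dist >= radius) & labels] = False      # N: fix false positives
    return labels

centers = np.array([[100.0, 100.0], [105.0, 98.0], [300.0, 40.0]])
scores = np.array([0.4, 0.9, 0.8])                 # raw confidences
print(apply_pn_constraints(centers, scores, np.array([102.0, 101.0])))
# -> [ True  True False]: the far, high-scoring detection is suppressed
```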

  12. Supervised Protein Family Classification and New Family Construction

    PubMed Central

    Yi, Gangman; Thon, Michael R.

    2012-01-01

    The goal of protein family classification is to group proteins into families so that proteins within the same family have a common function or are related by ancestry. While supervised classification algorithms are available for this purpose, most of these approaches focus on assigning unclassified proteins to known families and do not allow for the progressive construction of new families from proteins that cannot be assigned. Although unsupervised clustering algorithms are also available, they do not make use of information from known families. By computing similarities between proteins based on pairwise sequence comparisons, we develop supervised classification algorithms that achieve improved accuracy over previous approaches while allowing for the construction of new families. We show that our algorithm has a higher accuracy rate and a lower misclassification rate when compared to algorithms based on the use of multiple sequence alignments and hidden Markov models, and that it performs well even on families with very few proteins and on families with low sequence similarity. A software program implementing the algorithm (SClassify) is available online (http://faculty.cse.tamu.edu/shsze/sclassify). PMID:22876787

  13. Supervised Gamma Process Poisson Factorization

    SciTech Connect

    Anderson, Dylan Zachary

    2015-05-01

This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

  14. Automated Road Extraction from High Resolution Multispectral Imagery

    SciTech Connect

    Doucette, Peter J.; Agouris, Peggy; Stefanidis, Anthony

    2004-12-01

Road networks represent a vital component of geospatial data sets in high demand, and thus contribute significantly to extraction labor costs. Multispectral imagery has only recently become widely available at high spatial resolutions, and modeling spectral content has received limited consideration for road extraction algorithms. This paper presents a methodology that exploits spectral content for fully automated road centerline extraction. Preliminary detection of road centerline pixel candidates is performed with Anti-parallel-edge Centerline Extraction (ACE). This is followed by constructing a road vector topology with a fuzzy grouping model that links nodes from a self-organized mapping of the ACE pixels. Following topology construction, a self-supervised road classification (SSRC) feedback loop is implemented to automate the process of training sample selection and refinement for a road class, as well as deriving practical spectral definitions for non-road classes. SSRC demonstrates a potential to provide dramatic improvement in road extraction results by exploiting spectral content. Road centerline extraction results are presented for three 1m color-infrared suburban scenes, which show significant improvement following SSRC.

  15. Prediction of a Flash Flood in Complex Terrain. Part I: A Comparison of Rainfall Estimates from Radar, and Very Short Range Rainfall Simulations from a Dynamic Model and an Automated Algorithmic System.

    NASA Astrophysics Data System (ADS)

    Warner, Thomas T.; Brandes, Edward A.; Sun, Juanzhen; Yates, David N.; Mueller, Cynthia K.

    2000-06-01

Operational prediction of flash floods caused by convective rainfall in mountainous areas requires accurate estimates or predictions of the rainfall distribution in space and time. The details of the spatial distribution are especially critical in complex terrain because the watersheds generally are small in size, and position errors in the placement of the rainfall can distribute the rain over the wrong watershed. In addition to the need for good rainfall estimates, accurate flood prediction requires a surface-hydrologic model that is capable of predicting stream or river discharge based on the rainfall-rate input data. In Part I of this study, different techniques for the estimation and prediction of convective rainfall are applied to the Buffalo Creek, Colorado, flash flood of July 1996, during which over 75 mm of rain from a thunderstorm fell on the watershed in less than 1 h. The hydrologic impact of the rainfall was exacerbated by the fact that a considerable fraction of the watershed had experienced a wildfire approximately two months prior to the rain event. Precipitation estimates from the National Weather Service Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research S-band, dual-polarization radar, collocated east of Denver, Colorado, were compared. Very short range simulations from a convection-resolving dynamic model that was initialized variationally using the radar reflectivity and Doppler winds were compared with simulations from an automated algorithmic forecast system that also employs the radar data. The radar estimates of rain rate and the two forecasting systems that employ the radar data have degraded accuracy by virtue of the fact that they are applied in complex terrain. Nevertheless, the dynamic model and the automated algorithms both produce simulations that could be useful operationally for input to surface-hydrologic models employed for flood warning. Part II of this study, reported in a companion paper, describes the comparison of flood-discharge simulations driven by rainfall input from these systems.

  16. Supervised and Unsupervised Classification Using Mixture Models

    NASA Astrophysics Data System (ADS)

    Girard, S.; Saracco, J.

    2016-05-01

This chapter is dedicated to model-based supervised and unsupervised classification. Probability distributions are defined over possible labels as well as over the observations given the labels. To this end, the basic tools are mixture models. This methodology yields a posterior distribution over the labels given the observations, which makes it possible to quantify the uncertainty of the classification. The role of Gaussian mixture models is emphasized, leading to the Linear Discriminant Analysis and Quadratic Discriminant Analysis methods. Some links with Fisher Discriminant Analysis and logistic regression are also established. The Expectation-Maximization algorithm is introduced and compared to the K-means clustering method. The methods are illustrated on both simulated and real datasets using the R software.
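
    As an illustration of the uncertainty quantification the chapter emphasizes (a Python stand-in for its R examples), a two-component Gaussian mixture fitted by EM yields posterior label probabilities, while K-means returns only hard assignments:

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # Two well-separated 2-D Gaussian clusters as toy data.
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM under the hood
      posteriors = gmm.predict_proba(X)   # per-point uncertainty over labels
      hard = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(posteriors[:3], hard[:3])     # soft vs. hard assignments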

  17. Blinking supervision in a working environment

    NASA Astrophysics Data System (ADS)

    Morcego, Bernardo; Argilés, Marc; Cabrerizo, Marc; Cardona, Genís; Pérez, Ramon; Pérez-Cabré, Elisabet; Gispets, Joan

    2016-02-01

The health of the ocular surface requires blinks of the eye to be frequent in order to provide moisture and to renew the tear film. However, blinking frequency has been shown to decrease in certain conditions, such as when subjects are conducting tasks with high cognitive and visual demands. These conditions are becoming more common as people work or spend their leisure time in front of video display terminals. Supervision of blinking frequency in such environments is possible thanks to the availability of computer-integrated cameras. Therefore, the aim of the present study is to develop an algorithm for the detection of eye blinks and to test it on a number of videos captured while subjects were conducting a variety of tasks in front of the computer. The sensitivity of the algorithm for blink detection was found to be 87.54% (range 30% to 100%), with a mean false-positive rate of 0.19% (range 0% to 1.7%), depending on the illumination conditions during image capture and other computer-user spatial configurations. The current automatic process is based on partly modified pre-existing eye detection and image processing algorithms and consists of four stages that are aimed at eye detection, eye tracking, iris detection and segmentation, and iris height/width ratio assessment.
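
    A minimal sketch of the final stage only, assuming the earlier detection, tracking, and segmentation stages supply an iris bounding box per frame; the ratio threshold is illustrative, not taken from the study.

      def count_blinks(iris_boxes, ratio_threshold=0.35):
          # iris_boxes: list of (width, height) per frame, or None when the
          # eye was not found (hypothetical output of the earlier stages).
          blinks, eye_was_open = 0, True
          for box in iris_boxes:
              if box is None:
                  continue                    # skip frames where detection failed
              width, height = box
              closed = (height / width) < ratio_threshold
              if closed and eye_was_open:
                  blinks += 1                 # count the open -> closed transition
              eye_was_open = not closed
          return blinks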

  18. Blinking supervision in a working environment.

    PubMed

    Morcego, Bernardo; Argilés, Marc; Cabrerizo, Marc; Cardona, Genís; Pérez, Ramon; Pérez-Cabré, Elisabet; Gispets, Joan

    2016-02-01

The health of the ocular surface requires blinks of the eye to be frequent in order to provide moisture and to renew the tear film. However, blinking frequency has been shown to decrease in certain conditions, such as when subjects are conducting tasks with high cognitive and visual demands. These conditions are becoming more common as people work or spend their leisure time in front of video display terminals. Supervision of blinking frequency in such environments is possible thanks to the availability of computer-integrated cameras. Therefore, the aim of the present study is to develop an algorithm for the detection of eye blinks and to test it on a number of videos captured while subjects were conducting a variety of tasks in front of the computer. The sensitivity of the algorithm for blink detection was found to be 87.54% (range 30% to 100%), with a mean false-positive rate of 0.19% (range 0% to 1.7%), depending on the illumination conditions during image capture and other computer-user spatial configurations. The current automatic process is based on partly modified pre-existing eye detection and image processing algorithms and consists of four stages that are aimed at eye detection, eye tracking, iris detection and segmentation, and iris height/width ratio assessment.

  19. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to the high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to factors related to plant location, such as inadequate worker skills and motivation. Hence, the automation strategy should be formulated on the basis of an analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility, in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  20. Automated Speech Rate Measurement in Dysarthria

    ERIC Educational Resources Information Center

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  1. SAR image segmentation with entropy ranking based adaptive semi-supervised spectral clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangrong; Yang, Jie; Hou, Biao; Jiao, Licheng

    2010-10-01

Spectral clustering has become one of the most popular modern clustering algorithms in recent years. In this paper, a new algorithm named entropy-ranking-based adaptive semi-supervised spectral clustering is proposed for SAR image segmentation. We focus not only on finding a suitable scaling parameter but also on automatically determining the number of clusters using entropy ranking theory. In addition, semi-supervised spectral clustering based on two kinds of constraints, must-link and cannot-link, is applied to obtain better segmentation results. Experimental results on SAR images show that the proposed method outperforms other spectral clustering algorithms.
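
    One common way to impose such pairwise constraints, sketched here in Python under stated assumptions (the entropy-ranking selection of the scale parameter and cluster number is not reproduced): must-link pairs receive maximal affinity and cannot-link pairs zero affinity before the spectral embedding.

      import numpy as np
      from sklearn.cluster import SpectralClustering

      def constrained_spectral(X, must_link, cannot_link, n_clusters, sigma=1.0):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian affinity matrix
          for i, j in must_link:
              W[i, j] = W[j, i] = 1.0              # force the pair together
          for i, j in cannot_link:
              W[i, j] = W[j, i] = 0.0              # force the pair apart
          model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
          return model.fit_predict(W)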

  2. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    SciTech Connect

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.; Goad, Michael R.

    2008-12-05

We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.
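
    A bare-bones LVQ1 training loop for orientation (an assumption-level sketch, not the network used to build the catalogue): prototypes move toward samples of their own class and away from samples of other classes.

      import numpy as np

      def train_lvq(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
          P, pl = prototypes.copy(), np.asarray(proto_labels)
          for _ in range(epochs):
              for x, label in zip(X, y):
                  k = np.argmin(((P - x) ** 2).sum(axis=1))  # nearest prototype
                  if pl[k] == label:
                      P[k] += lr * (x - P[k])                # attract
                  else:
                      P[k] -= lr * (x - P[k])                # repel
          return P

      def predict_lvq(X, P, pl):
          # Label each spectrum by its nearest prototype.
          return pl[((P[None, :, :] - X[:, None, :]) ** 2).sum(-1).argmin(axis=1)]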

  3. Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.

    ERIC Educational Resources Information Center

    Mostafa, J.; Lam, W.

    2000-01-01

    Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…

  4. Enhancing the usability and performance of structured association mapping algorithms using automation, parallelization, and visualization in the GenAMap software system

    PubMed Central

    2012-01-01

Background: Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results: To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions: Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660

  5. Classification of Automated Search Traffic

    NASA Astrophysics Data System (ADS)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label in order to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
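
    A hedged sketch of the active-learning step (features, label set, and the logistic model are stand-ins for the paper's classifiers): the sessions with the smallest classification margin are suggested for human labeling.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def suggest_sessions(X_labeled, y_labeled, X_pool, n_queries=10):
          clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
          proba = np.sort(clf.predict_proba(X_pool), axis=1)
          margin = proba[:, -1] - proba[:, -2]      # top-two class margin
          return np.argsort(margin)[:n_queries]     # smallest margin = most uncertain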

  6. Active Semi-Supervised Learning Method with Hybrid Deep Belief Networks

    PubMed Central

    Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong

    2014-01-01

In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the following hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews. PMID:25208128

  7. Active semi-supervised learning method with hybrid deep belief networks.

    PubMed

    Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong

    2014-01-01

In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the following hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.

  8. "Learning Supervision": Trial by Fire

    ERIC Educational Resources Information Center

    Amundsen, Cheryl; McAlpine, Lynn

    2009-01-01

    This paper explores the experiences of new graduate supervisors, individuals who have just moved as it were from one side of "the table" to the other. We describe how their learning to "do supervision" relates to their understanding of academic work and how they make sense of the transition from doctoral student, someone…

  9. Bibliosupervision: A Creative Supervision Technique

    ERIC Educational Resources Information Center

    Graham, Mary Amanda; Pehrsson, Dale-Elizabeth

    2009-01-01

    This article offers a guide for bibliosupervision, a creative intervention that can be used when supervising counseling students. Bibliosupervision assists students in developing trust with the supervisor, as well as trust in their own abilities as emerging counselors. This supervisory process promotes the exploration of themes that might…

  10. Consultative Instructor Supervision and Evaluation

    ERIC Educational Resources Information Center

    Lee, William W.

    2010-01-01

    Organizations vary greatly in how they monitor training instructors. The methods used in monitoring vary greatly. This article presents a systematic process for improving instructor skills that result in better teaching and better learning, which results in better-prepared employees for the workforce. The consultative supervision and evaluation…

  11. Transforming Staff through Clinical Supervision

    ERIC Educational Resources Information Center

    Pfeifer, Douglas

    2011-01-01

    In order to continue to do great work with challenging youth, teachers should know that learning helps them be better professionals. Clinical supervision is one of the vehicles used. In a Re-ED program, those who work directly with youth (called teacher-counselors) are the primary agents of change. This makes it necessary to equip them with the…

  12. Butterflies, Bugs, and Supervising Teachers.

    ERIC Educational Resources Information Center

    Womack, Sid T.

    This study replicated one conducted in Texas in 1979. Student teachers were asked to list the beautiful things their supervising teachers did for them as well as the things that "bugged" them. Comparison of the results of the 1979 and 1989 studies indicated that the positive factors in the relationships were very similar. Positive…

  13. The "Effectiveness" of Differential Supervision

    ERIC Educational Resources Information Center

    Harris, Patricia M.; Gingerich, Raymond; Whittaker, Tiffany A.

    2004-01-01

    This article presents an evaluation of the Client Management Classification System (CMC), a method for assessment and differential supervision of offenders that embodies the principle of responsivity. As in prior evaluations of the CMC, probationers whose officers were trained in CMC techniques experienced lower rates of revocation compared with…

  14. Learning to Supervise: Four Journeys

    ERIC Educational Resources Information Center

    Turner, Gill

    2015-01-01

    This article explores the experiences of four early career academics as they begin to undertake doctoral supervision. Each supervisor focused on one of their supervisees and drew and described a Journey Plot depicting the high and low points of their supervisory experience with their student. Two questions were addressed by the research: (1) How…

  15. Using Technology in School Supervision.

    ERIC Educational Resources Information Center

    Bercik, Janet T.; Blair-Larsen, Susan M.

    2000-01-01

    Notes that few college faculty use technology in their teaching despite rapid growth in technology-based instruction in K-12 education. Describes two projects using AT&T's PersonaLink Service and Sony's Magic Link PIC 1000 in field experience supervision. (SG)

  16. SALSA: a pattern recognition algorithm to detect electrophile-adducted peptides by automated evaluation of CID spectra in LC-MS-MS analyses.

    PubMed

    Hansen, B T; Jones, J A; Mason, D E; Liebler, D C

    2001-04-15

    A pattern recognition algorithm called SALSA (scoring algorithm for spectral analysis) has been developed to rapidly screen large numbers of peptide MS-MS spectra for fragmentation characteristics indicative of specific peptide modifications. The algorithm facilitates sensitive and specific detection of modified peptides at low abundance in an enzymatic protein digest. SALSA can simultaneously score multiple user-specified search criteria, including product ions, neutral losses, charged losses, and ion pairs that are diagnostic of specific peptide modifications. Application of SALSA to the detection of peptide adducts of the electrophiles dehydromonocrotaline, benzoquinone, and iodoacetic acid permitted their detection in a complex tryptic peptide digest mixture. SALSA provides superior detection of adducted peptides compared to conventional tandem MS precursor ion or neutral loss scans.

  17. Automation pilot

    NASA Technical Reports Server (NTRS)

    1983-01-01

An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers in the conduct of their work, and to identify the human acceptance difficulties which may accompany the transition to a significantly changing work environment. The improved productivity and communications which result from application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated and these will be explored in detail.

  18. The Automated Logistics Element Planning System (ALEPS)

    NASA Technical Reports Server (NTRS)

    Schwaab, Douglas G.

    1991-01-01

ALEPS (Automated Logistics Element Planning System) is a computer system that will automate planning and decision support for Space Station Freedom Logistical Elements (LEs) resupply and return operations; its design and functions are described. ALEPS provides data management, planning, analysis, monitoring, interfacing, and flight certification in support of LE flight load planning activities. The prototype ALEPS algorithm development is also described.

  19. Exploring the Black Box of Community Supervision

    ERIC Educational Resources Information Center

    Bonta, James; Rugge, Tanya; Scott, Terri-Lynne; Bourgon, Guy; Yessine, Annie K.

    2008-01-01

    Community supervision has been an integral part of corrections since the establishment of probation more than 100 years ago. It has commonly been assumed that offenders benefit from community supervision much more than if they were incarcerated. However, empirical evidence in support of the effectiveness of community supervision in reducing…

  20. 48 CFR 836.572 - Government supervision.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Government supervision... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 836.572 Government supervision. The contracting officer shall insert the clause at 852.236-78, Government supervision,...

  1. 48 CFR 836.572 - Government supervision.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Government supervision... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 836.572 Government supervision. The contracting officer shall insert the clause at 852.236-78, Government supervision,...

  2. 48 CFR 836.572 - Government supervision.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Government supervision... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 836.572 Government supervision. The contracting officer shall insert the clause at 852.236-78, Government supervision,...

  3. 48 CFR 836.572 - Government supervision.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Government supervision... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 836.572 Government supervision. The contracting officer shall insert the clause at 852.236-78, Government supervision,...

  4. 48 CFR 836.572 - Government supervision.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Government supervision... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 836.572 Government supervision. The contracting officer shall insert the clause at 852.236-78, Government supervision,...

  5. Multicultural Supervision: What Difference Does Difference Make?

    ERIC Educational Resources Information Center

    Eklund, Katie; Aros-O'Malley, Megan; Murrieta, Imelda

    2014-01-01

    Multicultural sensitivity and competency represent critical components to contemporary practice and supervision in school psychology. Internship and supervision experiences are a capstone experience for many new school psychologists; however, few receive formal training and supervision in multicultural competencies. As an increased number of…

  6. Use of Live Supervision in Counselor Preparation.

    ERIC Educational Resources Information Center

    Bubenzer, Donald L.; And Others

    1991-01-01

    Investigated live supervision in counselor preparation programs by surveying 307 counselor preparation programs. Live supervision was used at 157 institutions and was used in preparing individual, group, and marriage and family counselors. At least 75 percent of programs provided live supervision weekly. Techniques of cotherapy and remote viewing…

  7. Competency-Based Student-Teacher Supervision

    ERIC Educational Resources Information Center

    Spanjer, R. Allan

    1975-01-01

    This author contends that student-teacher supervision cannot be done effectively in traditional ways. He discusses five myths of supervision and explains a program developed at Portland (Ore.) State University that puts the emphasis where it should be--on the supervising teacher. (Editor)

  8. A National Survey of School Counselor Supervision Practices: Administrative, Clinical, Peer, and Technology Mediated Supervision

    ERIC Educational Resources Information Center

    Perera-Diltz, Dilani M.; Mason, Kimberly L.

    2012-01-01

    Supervision is vital for personal and professional development of counselors. Practicing school counselors (n = 1557) across the nation were surveyed to explore current supervision practices. Results indicated that 41.1% of school counselors provide supervision. Although 89% receive some type of supervision, only 10.3% of school counselors receive…

  9. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

One of the most important tasks in a production process is to supervise its proper functioning. A lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a wide range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics, easier access to electronic products, and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter; difficulties arise only with transparent or mirrored objects observed from an unfavorable angle. By integrating a vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that can integrate different applications and devices used in various production systems to control the manufacturing process.

  10. Supervision Experiences of Professional Counselors Providing Crisis Counseling

    ERIC Educational Resources Information Center

    Dupre, Madeleine; Echterling, Lennis G.; Meixner, Cara; Anderson, Robin; Kielty, Michele

    2014-01-01

    In this phenomenological study, the authors explored supervision experiences of 13 licensed professional counselors in situations requiring crisis counseling. Five themes concerning crisis and supervision were identified from individual interviews. Findings support intensive, immediate crisis supervision and postlicensure clinical supervision.

  11. Application of Contingency Theories to the Supervision of Student Teachers.

    ERIC Educational Resources Information Center

    Phelps, Julia D.

    1985-01-01

    This article examines selected approaches to student teacher supervision within the context of contingency theory. These include authentic supervision, developmental supervision, and supervision based on the student's level of maturity. (MT)

  12. Software design for automated assembly of truss structures

    NASA Technical Reports Server (NTRS)

    Herstrom, Catherine L.; Grantham, Carolyn; Allen, Cheryl L.; Doggett, William R.; Will, Ralph W.

    1992-01-01

    Concern over the limited intravehicular activity time has increased the interest in performing in-space assembly and construction operations with automated robotic systems. A technique being considered at LaRC is a supervised-autonomy approach, which can be monitored by an Earth-based supervisor that intervenes only when the automated system encounters a problem. A test-bed to support evaluation of the hardware and software requirements for supervised-autonomy assembly methods was developed. This report describes the design of the software system necessary to support the assembly process. The software is hierarchical and supports both automated assembly operations and supervisor error-recovery procedures, including the capability to pause and reverse any operation. The software design serves as a model for the development of software for more sophisticated automated systems and as a test-bed for evaluation of new concepts and hardware components.

  13. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
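
    A simplified sketch of the first approach, assuming a Gaussian model for the single known class (the paper's estimator of the optimal acceptance probability is not reproduced): a sample is accepted when its likelihood exceeds a cutoff estimated from the training samples.

      import numpy as np
      from scipy.stats import multivariate_normal

      def one_class_rule(train, alpha=0.05):
          # Fit the known class with a Gaussian density.
          mu, cov = train.mean(axis=0), np.cov(train, rowvar=False)
          dens = multivariate_normal(mu, cov)
          # Cutoff chosen so roughly (1 - alpha) of training samples are accepted.
          threshold = np.quantile(dens.pdf(train), alpha)
          return lambda X: dens.pdf(X) >= threshold   # True = assigned to known class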

  14. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning

    PubMed Central

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P.; Zelikowsky, Moriel; Navonne, Santiago G.; Perona, Pietro; Anderson, David J.

    2015-01-01

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body “pose” of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics. PMID:26354123

  15. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    NASA Astrophysics Data System (ADS)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90+/-9% sensitivity and 85+/-9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a "generalization" of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
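
    An illustrative pipeline in the spirit of the two-class model described above (dummy data; band selection by an F-test stands in for the paper's choice of discriminative Raman bands).

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      spectra = rng.normal(size=(329, 500))   # 329 spectra, 500 wavenumbers (dummy)
      labels = rng.integers(0, 2, 329)        # 0 = normal skin, 1 = BCC (dummy)

      # Select the most discriminative bands, then classify with LDA.
      model = make_pipeline(SelectKBest(f_classif, k=10), LinearDiscriminantAnalysis())
      print(cross_val_score(model, spectra, labels, cv=5).mean())  # cross-validated accuracy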

  16. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning.

    PubMed

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J

    2015-09-22

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.

  17. Fatigue Level Estimation of Bill Based on Acoustic Signal Feature by Supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

Fatigued bills have a harmful influence on the daily operation of automated teller machines (ATMs). To make fatigued-bill classification more efficient, the development of an automatic classification method is desired. We propose a new method to estimate the bending rigidity of a bill from the acoustic signal features of banking machines. The estimated bending rigidities are used as a continuous fatigue level for the classification of fatigued bills. Using a supervised Self-Organizing Map (supervised SOM), we effectively estimate the bending rigidity from the acoustic energy pattern alone. Experimental results with real bill samples show the effectiveness of the proposed method.
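
    A compact sketch of the supervised-SOM idea under stated assumptions (a hand-rolled 1-D map, not the authors' implementation): the rigidity label is appended to the acoustic feature vector during training and masked at recall, so the winning unit's stored label becomes the estimate.

      import numpy as np

      def train_supervised_som(features, labels, n_units=30, lr=0.5, sigma=3.0, epochs=50):
          X = np.hstack([features, labels[:, None]])      # append the label dimension
          W = np.random.default_rng(0).normal(size=(n_units, X.shape[1]))
          for t in range(epochs):
              decay = np.exp(-t / epochs)
              for x in X:
                  bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
                  dist = np.abs(np.arange(n_units) - bmu)      # 1-D map topology
                  h = np.exp(-dist ** 2 / (2 * (sigma * decay) ** 2))
                  W += lr * decay * h[:, None] * (x - W)
          return W

      def estimate_rigidity(W, feature_vec):
          bmu = np.argmin(((W[:, :-1] - feature_vec) ** 2).sum(axis=1))  # mask label
          return W[bmu, -1]                                # stored label = estimate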

  18. Office automation.

    PubMed

    Arenson, R L

    1986-03-01

    By now, the term "office automation" should have more meaning for those readers who are not intimately familiar with the subject. Not all of the preceding material pertains to every department or practice, but certainly, word processing and simple telephone management are key items. The size and complexity of the organization will dictate the usefulness of electronic mail and calendar management, and the individual radiologist's personal needs and habits will determine the usefulness of the home computer. Perhaps the most important ingredient for success in the office automation arena relates to the ability to integrate information from various systems in a simple and flexible manner. Unfortunately, this is perhaps the one area that most office automation systems have ignored or handled poorly. In the personal computer world, there has been much emphasis recently on integration of packages such as spreadsheet, database management, word processing, graphics, time management, and communications. This same philosophy of integration has been applied to a few office automation systems, but these are generally vendor-specific and do not allow for a mixture of foreign subsystems. During the next few years, it is likely that a few vendors will emerge as dominant in this integrated office automation field and will stress simplicity and flexibility as major components.

  19. [Software version and medical device software supervision].

    PubMed

    Peng, Liang; Liu, Xiaoyan

    2015-01-01

The importance of the software version in medical device software supervision has not received enough attention at present. First, the role of the software version in medical device software supervision is discussed, and the necessity of tracking software versions is then analyzed based on a discussion of common misunderstandings of software versioning. Finally, concrete suggestions are proposed on software version naming rules, the supervision of software versions in medical devices, and a software version supervision scheme.

  20. Automated Approaches to RFI Flagging

    NASA Astrophysics Data System (ADS)

    Garimella, Karthik; Momjian, Emmanuel

    2017-01-01

It is known that Radio Frequency Interference (RFI) is a major issue in centimeter-wavelength radio astronomy. Radio astronomy software packages include tools to excise RFI, both manual and automated, that operate on the visibilities (the uv data). Here we present results on an automated RFI flagging approach that utilizes a uv-grid, which is the intermediate product when converting uv data points to an image. It is a well-known fact that any signal that appears widespread in a given domain (e.g., the image domain) is compact in the Fourier domain (the uv-grid domain); i.e., RFI sources that appear as large-scale structures (e.g., stripes) in images can be located and flagged using the uv-grid data set. We developed several automated uv-grid-based flagging algorithms to detect and excise RFI. These algorithms will be discussed, and results of applying them to measurement sets will be presented.
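
    A toy sketch of the uv-grid intuition (a conceptual illustration only; real pipelines flag visibilities, while this operates on a dummy image): structures that are widespread in the image collapse to a few bright uv cells, so robust sigma-clipping of uv amplitudes locates them.

      import numpy as np

      def flag_uv_outliers(image, n_sigma=6.0):
          uv = np.fft.fftshift(np.fft.fft2(image))       # image -> uv-grid
          amp = np.abs(uv)
          med = np.median(amp)
          mad = np.median(np.abs(amp - med))
          flags = amp > med + n_sigma * 1.4826 * mad     # robust sigma clip
          uv[flags] = 0.0                                # excise flagged cells
          cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(uv)))
          return flags, cleaned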

  1. Automated power management and control

    NASA Technical Reports Server (NTRS)

    Dolce, James L.

    1991-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. A joint effort between NASA's Office of Aeronautics and Exploration Technology and NASA's Office of Space Station Freedom, it strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. The initial station operation will use ground-based dispatches to perform the necessary command and control tasks. These tasks constitute planning and decision-making activities that strive to eliminate unplanned outages. We perceive an opportunity to help these dispatchers make fast and consistent on-line decisions by automating three key tasks: failure detection and diagnosis, resource scheduling, and security analysis. Expert systems will be used for the diagnostics and for the security analysis; conventional algorithms will be used for the resource scheduling.

  2. Prediction of a Flash Flood in Complex Terrain. Part II: A Comparison of Flood Discharge Simulations Using Rainfall Input from Radar, a Dynamic Model, and an Automated Algorithmic System.

    NASA Astrophysics Data System (ADS)

    Yates, David N.; Warner, Thomas T.; Leavesley, George H.

    2000-06-01

    Three techniques were employed for the estimation and prediction of precipitation from a thunderstorm that produced a flash flood in the Buffalo Creek watershed located in the mountainous Front Range near Denver, Colorado, on 12 July 1996. The techniques included 1) quantitative precipitation estimation using the National Weather Service's Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research's S-band, dual-polarization radars, 2) quantitative precipitation forecasting utilizing a dynamic model, and 3) quantitative precipitation forecasting using an automated algorithmic system for tracking thunderstorms. Rainfall data provided by these various techniques at short timescales (6 min) and at fine spatial resolutions (150 m to 2 km) served as input to a distributed-parameter hydrologic model for analysis of the flash flood. The quantitative precipitation estimates from the weather radar demonstrated their ability to aid in simulating a watershed's response to precipitation forcing from small-scale, convective weather in complex terrain. That is, with the radar-based quantitative precipitation estimates employed as input, the simulated peak discharge was similar to that estimated. The dynamic model showed the most promise in providing a significant forecast lead time for this flash-flood event. The algorithmic system did not show as much skill in comparison with the dynamic model in providing precipitation forcing to the hydrologic model. The discharge forecasts based on the dynamic-model and algorithmic-system inputs point to the need to improve the ability to forecast convective storms, especially if models such as these eventually are to be used in operational flood forecasting.

  3. Habitat automation

    NASA Technical Reports Server (NTRS)

    Swab, Rodney E.

    1992-01-01

    A habitat, on either the surface of the Moon or Mars, will be designed and built with the proven technologies of that day. These technologies will be mature and readily available to the habitat designer. We believe an acceleration of the normal pace of automation would allow a habitat to be safer and more easily maintained than would be the case otherwise. This document examines the operation of a habitat and describes elements of that operation which may benefit from an increased use of automation. Research topics within the automation realm are then defined and discussed with respect to the role they can have in the design of the habitat. Problems associated with the integration of advanced technologies into real-world projects at NASA are also addressed.

  4. An automated workflow for patient-specific quality control of contour propagation.

    PubMed

    Beasley, William J; McWilliam, Alan; Slevin, Nicholas J; Mackay, Ranald I; van Herk, Marcel

    2016-12-21

Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated: mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operator characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
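
    A minimal sketch of the evaluation step with dummy data (the workflow is assumed to emit a per-contour error score; the scores and labels below are simulated, not the study's).

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(2)
      has_error = rng.integers(0, 2, 200)   # simulated ground-truth error labels
      score = np.where(has_error == 1,
                       rng.normal(0.7, 0.2, 200),   # erroneous contours score higher
                       rng.normal(0.3, 0.2, 200))

      print("AUC:", roc_auc_score(has_error, score))
      fpr, tpr, thresholds = roc_curve(has_error, score)  # choose a flagging threshold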

  5. Estimating travel and service times for automated route planning and service certification in municipal waste management.

    PubMed

    Ghiani, Gianpaolo; Guerrieri, Antonio; Manni, Andrea; Manni, Emanuele

    2015-12-01

Nowadays, route planning algorithms are commonly used to generate detailed work schedules for solid waste collection vehicles. However, the reliability of such schedules relies heavily on the accuracy of a number of parameters, such as the actual service time at each collection location and the traversal times of the streets (which depend on the specific day of the week and the time of day). In this paper, we propose an automated classification and estimation algorithm that, based on Global Positioning System data collected by the fleet, estimates such parameters in a timely and accurate fashion. In particular, our approach is able to automatically classify events such as stops due to traffic jams, stops at traffic lights and stops at collection sites. The system can also be used for automated fleet supervision and to notify on a web site whether certain services have actually been provided on a certain day, thus making waste management more accountable to citizens. An experimental campaign carried out in an Italian municipality shows the advantages of our approach.
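
    A hedged sketch of the stop-classification idea (coordinates, radii, and site lists are assumptions, not the paper's parameters): each GPS stop is labeled by proximity to known collection sites or traffic lights, and otherwise attributed to traffic.

      import math

      def haversine_m(p, q):
          # Great-circle distance in meters between (lat, lon) pairs.
          R = 6371000.0
          dlat, dlon = math.radians(q[0] - p[0]), math.radians(q[1] - p[1])
          a = (math.sin(dlat / 2) ** 2 + math.cos(math.radians(p[0]))
               * math.cos(math.radians(q[0])) * math.sin(dlon / 2) ** 2)
          return 2 * R * math.asin(math.sqrt(a))

      def classify_stop(stop_pos, sites, lights, r_site=15.0, r_light=25.0):
          if any(haversine_m(stop_pos, s) < r_site for s in sites):
              return "collection"      # service time at a collection location
          if any(haversine_m(stop_pos, t) < r_light for t in lights):
              return "traffic_light"
          return "traffic_jam"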

  6. An automated workflow for patient-specific quality control of contour propagation

    NASA Astrophysics Data System (ADS)

    Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel

    2016-12-01

    Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated—mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operator characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.

7. Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Medioni, Gerard G.; Hernandez, Matthias; Sadda, SriniVas R.

    2014-03-01

Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of the advanced or late stage of AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would appear to be of important value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA, including uni-focal and multi-focal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures (mean and variance), gray-level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map, representing the likelihood that each image pixel belongs to GA. A voting binary iterative hole-filling filter is then applied to fill in small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions are compared with manual delineations performed by certified graders. Two-fold cross-validation is applied to evaluate the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions are 0.84 +/- 0.06 for one test and 0.83 +/- 0.07 for the other, and the area correlations between them are 0.99 (p < 0.05) and 0.94 (p < 0.05), respectively.
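
    A sketch of the pixel-wise pipeline under stated assumptions (per-pixel feature vectors from the intensity, GLCM, and filter-bank measures are taken as given): k-NN yields a GA probability map, and the Dice coefficient scores the final segmentation.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def ga_probability_map(train_features, train_labels, image_features, shape, k=15):
          knn = KNeighborsClassifier(n_neighbors=k).fit(train_features, train_labels)
          return knn.predict_proba(image_features)[:, 1].reshape(shape)  # P(GA) per pixel

      def dice(seg_a, seg_b):
          # Dice similarity coefficient between two binary masks.
          inter = np.logical_and(seg_a, seg_b).sum()
          return 2.0 * inter / (seg_a.sum() + seg_b.sum())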

  8. Semi-supervised classification via local spline regression.

    PubMed

    Xiang, Shiming; Nie, Feiping; Zhang, Changshui

    2010-11-01

This paper presents local spline regression for semi-supervised classification. The core idea in our approach is to introduce splines developed in Sobolev space to map the data points directly to class labels. The spline is composed of polynomials and Green's functions. It is smooth, nonlinear, and able to interpolate the scattered data points with high accuracy. Specifically, in each neighborhood, an optimal spline is estimated via regularized least squares regression. With this spline, each of the neighboring data points is mapped to a class label. Then, the regularized loss is evaluated and further formulated in terms of the class label vector. Finally, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on the labeled and unlabeled data. To achieve the goal of semi-supervised classification, an objective function is constructed by combining the global loss of the local spline regressions with the squared errors of the class labels of the labeled data. In this way, a transductive classification algorithm is developed in which a globally optimal classification can finally be obtained. In the semi-supervised learning setting, the proposed algorithm is analyzed and situated within the Laplacian regularization framework. Comparative classification experiments on many public data sets and applications to interactive image segmentation and image matting illustrate the validity of our method.

  9. Using Machine Learning and Natural Language Processing Algorithms to Automate the Evaluation of Clinical Decision Support in Electronic Medical Record Systems

    PubMed Central

    Szlosek, Donald A; Ferrett, Jonathan

    2016-01-01

    Introduction: As the number of clinical decision support systems (CDSSs) incorporated into electronic medical records (EMRs) increases, so does the need to evaluate their effectiveness. The use of medical record review and similar manual methods for evaluating decision rules is laborious and inefficient. The authors use machine learning and Natural Language Processing (NLP) algorithms to evaluate a clinical decision support rule through an EMR system, and they compare this automated evaluation against manual review. Methods: Modeled after the EMR system EPIC at Maine Medical Center, we developed a dummy data set containing free-text physician notes for 3,621 artificial patient records undergoing a head computed tomography (CT) scan for mild traumatic brain injury after the incorporation of an electronic best practice approach. We validated the accuracy of the Best Practice Advisories (BPA) using three machine learning algorithms—C-Support Vector Classification (SVC), Decision Tree Classifier (DecisionTreeClassifier), k-nearest neighbors classifier (KNeighborsClassifier)—by comparing their accuracy for adjudicating the occurrence of a mild traumatic brain injury against manual review. We then used the best of the three algorithms to evaluate the effectiveness of the BPA, and we compared the algorithm's evaluation of the BPA to that of manual review. Results: The electronic best practice approach was found to have a sensitivity of 98.8 percent (96.83–100.0), specificity of 10.3 percent, PPV = 7.3 percent, and NPV = 99.2 percent when reviewed manually by abstractors. Though all the machine learning algorithms achieved a high level of predictive accuracy, the SVC displayed the highest, with a sensitivity of 93.33 percent (92.49–98.84), specificity of 97.62 percent (96.53–98.38), PPV = 50.00, and NPV = 99.83. The SVC algorithm was observed to have a sensitivity of 97.9 percent (94.7–99.86), specificity of 10.30 percent, PPV of 7.25 percent, and NPV of 99.2 percent for
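
    A hedged sketch of the note-classification step follows: TF-IDF features over free-text notes and the three classifier families named in the abstract, compared by cross-validation. The toy notes and labels are placeholders, not the study's EPIC-modeled data set.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        notes = [
            "head ct negative for acute intracranial hemorrhage",
            "patient alert, no loss of consciousness reported",
            "ct shows small subdural hematoma after fall",
            "concussion with brief loss of consciousness",
            "no acute findings on head ct",
            "mild tbi suspected, positive loss of consciousness",
        ] * 5
        labels = [0, 0, 1, 1, 0, 1] * 5           # 1 = mild TBI adjudicated present

        X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(notes)
        for clf in (SVC(kernel="linear"), DecisionTreeClassifier(), KNeighborsClassifier()):
            acc = cross_val_score(clf, X, labels, cv=3).mean()
            print(type(clf).__name__, round(acc, 3))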

  10. Cognitive Inference Device for Activity Supervision in the Elderly

    PubMed Central

    2014-01-01

    Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life at old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly. To frame the CI-device, we propose a device design framework along with an inference algorithm and implement the designs through an artificial neural model with different configurations, mapping the CI-device's functions to minimise the device's prediction error. An analysis and discussion are then provided to validate the feasibility of CI-device implementation for activity supervision in the elderly. PMID:25405211

  11. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    2000-01-01

    An automated propellant blending apparatus and method that uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation is discussed. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  12. Automated Propellant Blending

    NASA Technical Reports Server (NTRS)

    Hohmann, Carl W. (Inventor); Harrington, Douglas W. (Inventor); Dutton, Maureen L. (Inventor); Tipton, Billy Charles, Jr. (Inventor); Bacak, James W. (Inventor); Salazar, Frank (Inventor)

    1999-01-01

    An automated propellant blending apparatus and method uses closely metered addition of countersolvent to a binder solution with propellant particles dispersed therein to precisely control binder precipitation and particle aggregation. A profile of binder precipitation versus countersolvent-solvent ratio is established empirically and used in a computer algorithm to establish countersolvent addition parameters near the cloud point for controlling the transition of properties of the binder during agglomeration and finishing of the propellant composition particles. The system is remotely operated by computer for safety, reliability and improved product properties, and also increases product output.

  13. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  14. Semi-Supervised Eigenbasis Novelty Detection

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Thompson, David R.

    2013-01-01

    Recent discoveries in high-time-resolution radio astronomy data have focused attention on a new class of events. Fast transients are rare pulses of radio frequency energy lasting from microseconds to seconds that might be produced by a variety of exotic astrophysical phenomena. For example, X-ray bursts, neutron stars, and active galactic nuclei are all possible sources of short-duration, transient radio signals. It is difficult to anticipate where such signals might appear, and they are most commonly discovered through analysis of high-time-resolution data that had been collected for other purposes. Transients are often faint and difficult to detect, so improved detection algorithms can directly benefit the science yield of all such commensal monitoring. A new detection algorithm learns a low-dimensional linear manifold for describing the normal data. High reconstruction error indicates a novel signal that does not match the patterns of normal data. One unsupervised portion of the manifold model adapts its representation in response to recent data. A second supervised portion of the model consists of a basis trained in advance using labeled examples of RFI; this prevents false positives due to these events. For a linear model, an orthonormalization operation is used to combine these bases prior to the anomaly detection decision. Another novel aspect of the approach lies in combining basis vectors learned in an unsupervised, online fashion from the data stream with supervised basis vectors learned in advance from known examples of false alarms. Adaptive, data-driven detection is achieved that is also informed by existing domain knowledge about signals that may be statistically anomalous, but are not interesting and should therefore be ignored. The method was evaluated using data from the Parkes Multibeam Survey. This data set was originally collected to search for pulsars, which are astronomical sources that emit radio pulses at regular periods. However, several
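
    The core scoring idea, reconstruction error against an orthonormalized combination of an unsupervised basis (learned from recent data) and a supervised basis (trained in advance on known false alarms), can be sketched in a few lines. Everything below is illustrative: random stand-in data, fixed basis sizes, and a batch SVD in place of the online adaptation.

        import numpy as np

        rng = np.random.default_rng(0)
        normal = rng.normal(size=(500, 32))         # recent "normal" data
        rfi = rng.normal(size=(50, 32))             # labeled RFI false alarms

        # Unsupervised basis: top principal directions of the recent data
        _, _, vt = np.linalg.svd(normal - normal.mean(0), full_matrices=False)
        unsup_basis = vt[:5]
        # Supervised basis: trained in advance on the known RFI examples
        _, _, vt_rfi = np.linalg.svd(rfi - rfi.mean(0), full_matrices=False)
        sup_basis = vt_rfi[:3]

        # Orthonormalize the combined basis before scoring
        q, _ = np.linalg.qr(np.vstack([unsup_basis, sup_basis]).T)

        def novelty(x):
            resid = x - q @ (q.T @ x)               # reconstruction error
            return np.linalg.norm(resid)            # high => candidate transient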

  15. A review of supervised machine learning applied to ageing research.

    PubMed

    Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A

    2017-04-01

    Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms, and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been published recently, so it is timely to review these works and discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; and ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advancing our knowledge and has provided novel insights on ageing, yet future work should place a greater emphasis on validating the predictions.

  16. An Effective Big Data Supervised Imbalanced Classification Approach for Ortholog Detection in Related Yeast Species

    PubMed Central

    Galpert, Deborah; del Río, Sara; Herrera, Francisco; Ancede-Gallardo, Evys; Antunes, Agostinho; Agüero-Chapin, Guillermin

    2015-01-01

    Orthology detection requires more effective scaling algorithms. In this paper, a set of gene pair features based on similarity measures (alignment scores, sequence length, gene membership to conserved regions, and physicochemical profiles) are combined in a supervised pairwise ortholog detection approach to improve effectiveness, given the low ratio of orthologs among all possible pairwise comparisons between two genomes. In this scenario, big data supervised classifiers that manage the imbalance between ortholog and non-ortholog pair classes allow for an effective scaling solution built from two genomes and extended to other genome pairs. The supervised approach was compared with the RBH, RSD, and OMA algorithms using the following yeast genome pairs as benchmark datasets: Saccharomyces cerevisiae-Kluyveromyces lactis, Saccharomyces cerevisiae-Candida glabrata, and Saccharomyces cerevisiae-Schizosaccharomyces pombe. Because of the large amount of imbalanced data, building and testing the supervised model was only possible with big data supervised classifiers that manage imbalance. Evaluation metrics taking low ortholog ratios into account were applied. From the effectiveness perspective, MapReduce Random Oversampling combined with Spark SVM outperformed RBH, RSD, and OMA, probably because of the consideration of gene pair features beyond alignment similarities combined with advances in big data supervised classification. PMID:26605337
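
    The paper's pipeline runs on Spark; the sketch below shows only the underlying idea on a single machine, random oversampling of the minority (ortholog) class before fitting a linear SVM. The feature matrix and the 2% positive rate are fabricated for illustration.

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 8))              # gene-pair features
        y = np.zeros(1000, dtype=int)
        y[:20] = 1                                  # ~2% orthologs: heavy imbalance

        minority = np.flatnonzero(y == 1)
        extra = rng.choice(minority, size=(y == 0).sum() - minority.size)
        X_bal = np.vstack([X, X[extra]])            # oversampled training set
        y_bal = np.concatenate([y, y[extra]])
        # (class_weight="balanced" in LinearSVC is a simpler alternative)
        clf = LinearSVC().fit(X_bal, y_bal)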

  17. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data.

    PubMed

    Ganesh Kumar, Pugalendhi; Kavitha, Muthu Subash; Ahn, Byeong-Cheol

    2016-01-01

    This study describes a novel approach to reducing the challenges posed by highly nonlinear multiclass gene expression values in cancer diagnosis. To build a fruitful system for cancer diagnosis, we introduced two levels of gene selection, filtering and embedding, for the selection of potential genes and of the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method that redefines the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure is implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership functions required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which have better exploration and exploitation abilities when designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary cancer gene expression datasets. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony algorithm (ABC), on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, FRFI-WSA found the smallest set of 16 most relevant genes associated with cancer, using a minimal number of 26 compact rules, with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the most relevant cancer-associated genes detected by the proposed FRFI-WSA approach, and the linguistic terms describing them, are more biologically relevant than those of the other methods. The simple interpretable rules with the most relevant genes and effectively classified

  18. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data

    PubMed Central

    Ahn, Byeong-Cheol

    2016-01-01

    This study describes a novel approach to reducing the challenges posed by highly nonlinear multiclass gene expression values in cancer diagnosis. To build a fruitful system for cancer diagnosis, we introduced two levels of gene selection, filtering and embedding, for the selection of potential genes and of the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method that redefines the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure is implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership functions required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which have better exploration and exploitation abilities when designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary cancer gene expression datasets. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony algorithm (ABC), on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, FRFI-WSA found the smallest set of 16 most relevant genes associated with cancer, using a minimal number of 26 compact rules, with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the most relevant cancer-associated genes detected by the proposed FRFI-WSA approach, and the linguistic terms describing them, are more biologically relevant than those of the other methods. The simple interpretable rules with the most relevant genes and effectively classified

  19. Computerized Lab Supervision Hits Campus.

    ERIC Educational Resources Information Center

    Worthy, Ward

    1988-01-01

    Announces the incorporation of a laboratory information management system (LIMS) into the academic environment. Describes the applications of a computer-automated laboratory system at one university. Stresses the benefits to students of the use of such a system in terms of entry into the industrial environment and to professors in grading. (CW)

  20. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and

  1. Target Localization in Wireless Sensor Networks Using Online Semi-Supervised Support Vector Regression

    PubMed Central

    Yoo, Jaehyun; Kim, H. Jin

    2015-01-01

    Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on the semi-supervised learning framework, it can reduce the required amount of labeled training data while maintaining accurate estimation. Second, with the extension to online learning, the proposed OSS-SVR automatically tracks changes in the system to be learned, such as varying noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process, and online semi-supervised colocalization. The algorithms are evaluated on the task of estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate with a smaller amount of labeled training data and is robust to varying noise. Moreover, the suggested algorithm performs fast computation while maintaining the best localization performance among the compared methods. PMID:26024420
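
    No reference implementation of OSS-SVR is assumed here; the sketch below shows only the supervised core of the task, support vector regression mapping signal measurements to 2-D coordinates, without the semi-supervised and online extensions that are the paper's contribution. Data shapes are illustrative.

        import numpy as np
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        rss = rng.normal(size=(300, 6))             # signal strengths from 6 nodes
        pos = rng.uniform(0, 10, size=(300, 2))     # labeled (x, y) positions

        loc = MultiOutputRegressor(SVR(kernel="rbf")).fit(rss, pos)
        print(loc.predict(rss[:1]))                 # estimated robot location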

  2. Constructing Aligned Assessments Using Automated Test Construction

    ERIC Educational Resources Information Center

    Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui

    2013-01-01

    We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…

  3. Supervised Discrete Hashing With Relaxation.

    PubMed

    Gui, Jie; Liu, Tongliang; Sun, Zhenan; Tao, Dacheng; Tan, Tieniu

    2016-12-29

    Data-dependent hashing has recently attracted attention because it can support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called ''supervised discrete hashing with relaxation'' (SDHR), based on ''supervised discrete hashing'' (SDH). SDH uses ordinary least squares regression and a traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large-margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and therefore measures the classification error of the regression model more accurately and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR.
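
    As a loose toy of the regression-to-targets idea (not the discrete optimization in SDH/SDHR), the sketch below regresses features onto a zero-one label matrix, the fixed target that SDHR replaces with a learned large-margin target, and takes signs to obtain binary codes. All data are fabricated.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 64))              # image features
        y = rng.integers(0, 10, 500)
        Y = np.eye(10)[y]                           # zero-one label matrix (SDH's fixed target)

        W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # ordinary least squares regression
        codes = np.sign(X @ W)                      # 10-bit binary codes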

  4. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of the data to calculate an approximation of the Hessian matrix. Unfortunately, we find that in some situations the running speed is worse than that of the standard Newton method, because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction so as to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
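
    A toy one-step sketch for L2-regularized logistic regression: the gradient uses all the data while the Hessian is estimated from a random subsample and rescaled, which is the basic subsampled-Hessian Newton idea the paper improves on. The two-dimensional subproblem refinement proposed in the paper is not shown.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def subsampled_newton_step(w, X, y, lam=1.0, frac=0.1, rng=None):
            if rng is None:
                rng = np.random.default_rng(0)
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) + lam * w          # full-data gradient
            idx = rng.choice(len(X), max(1, int(frac * len(X))), replace=False)
            d = p[idx] * (1 - p[idx])
            # Subsampled Hessian, rescaled by 1/frac to approximate the full one
            H = (X[idx] * d[:, None]).T @ X[idx] / frac + lam * np.eye(len(w))
            return w - np.linalg.solve(H, grad)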

  5. SNMFCA: supervised NMF-based image classification and annotation.

    PubMed

    Jing, Liping; Zhang, Chao; Ng, Michael K

    2012-11-01

    In this paper, we propose a novel supervised nonnegative matrix factorization-based framework for both image classification and annotation. The framework consists of two phases: training and prediction. In the training phase, two supervised nonnegative matrix factorizations, for image descriptors and annotation terms, are combined to identify the latent image bases and to represent the training images in the basis space. These latent bases can capture the representation of the images in terms of both descriptors and annotation terms. Based on the new representation of the training images, classifiers can be learned and built. In the prediction phase, a test image is first represented in the latent bases by solving a linear least squares problem, and then its class label and annotation are predicted via the trained classifiers and the proposed annotation mapping model. Within the algorithm, we develop a three-block proximal alternating nonnegative least squares algorithm to determine the latent image bases, and we show its convergence. Extensive experiments on real-world image data sets suggest that the proposed framework is able to predict the label and annotation for testing images successfully. Experimental results have also shown that our algorithm is computationally efficient and effective for image classification and annotation.
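
    A greatly simplified two-phase sketch follows: learn nonnegative latent bases on training descriptors, train a classifier in the basis space, then represent a test image via nonnegative least squares and predict. The paper's supervision of the factorization and its annotation terms are omitted; data and dimensions are made up.

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.decomposition import NMF
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X_train = rng.random((100, 256))            # nonnegative image descriptors
        y_train = rng.integers(0, 3, 100)

        nmf = NMF(n_components=20, init="nndsvda", max_iter=500)
        W = nmf.fit_transform(X_train)              # training images in basis space
        clf = LogisticRegression(max_iter=1000).fit(W, y_train)

        x_test = rng.random(256)
        h, _ = nnls(nmf.components_.T, x_test)      # test image in the same bases
        print(clf.predict(h.reshape(1, -1)))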

  6. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose as the only variable, face images should lie on a smooth, low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address this problem, we propose to incorporate supervised information (the pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weights, by constraining them with the pose angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations.

  7. Weakly-Supervised Multimodal Kernel for Categorizing Aerial Photographs.

    PubMed

    Xia, Yingjie; Zhang, Luming; Liu, Zhenguang; Nie, Liqiang; Li, Xuelong

    2016-12-14

    Accurately distinguishing aerial photographs from different categories is a promising technique in computer vision. It can facilitate a series of applications such as video surveillance and vehicle navigation. In this paper, a new image kernel is proposed for effectively recognizing aerial photographs. The key is to encode high-level semantic cues into local image patches in a weakly-supervised way, and to integrate multimodal visual features using a newly-developed hashing algorithm. The pipeline is as follows. Given an aerial photo, we first extract a number of graphlets to describe its topological structure. For each graphlet, we utilize color and texture to capture its appearance, and a weakly-supervised algorithm to capture its semantics. Thereafter, aerial photo categorization can be naturally formulated as graphlet-to-graphlet matching. As the number of graphlets from each aerial photo is huge, to accelerate matching, we present a hashing algorithm to seamlessly fuse the multiple visual features into binary codes. Finally, an image kernel is calculated by fast matching of the binary codes corresponding to each graphlet, and a multi-class SVM is learned for aerial photo categorization. We demonstrate the advantage of our proposed model by comparing it with state-of-the-art image descriptors. Moreover, an in-depth study of the descriptiveness of the hash-based graphlets is presented.

  8. ICT Strategies and Tools for the Improvement of Instructional Supervision. The Virtual Supervision

    ERIC Educational Resources Information Center

    Cano, Esteban Vazquez; Garcia, Ma. Luisa Sevillano

    2013-01-01

    This study aims to evaluate and analyze strategies, proposals, and ICT tools to promote a paradigm shift in educational supervision that engages the schools of this century not only in face-to-face teaching and learning, but also in e-learning and blended learning. Traditional models of educational supervision do not guarantee adequate supervision of the…

  9. GENIES: gene network inference engine based on supervised analysis.

    PubMed

    Kotera, Masaaki; Yamanishi, Yoshihiro; Moriya, Yuki; Kanehisa, Minoru; Goto, Susumu

    2012-07-01

    Gene network inference engine based on supervised analysis (GENIES) is a web server for predicting unknown parts of gene networks from various types of genome-wide data in the framework of supervised network inference. The originality of GENIES lies in the construction of a predictive model using partially known network information and in the integration of heterogeneous data with kernel methods. The GENIES server accepts any 'profiles' of genes or proteins (e.g. gene expression profiles, protein subcellular localization profiles, and phylogenetic profiles) or pre-calculated gene-gene similarity matrices (or 'kernels') in the tab-delimited file format. As a training data set to learn the predictive model, users can choose either known molecular network information in the KEGG PATHWAY database or their own gene network data. Users can also select an algorithm for supervised network inference, choose various parameters of the method, and control the weights of heterogeneous data integration. The server provides the list of newly predicted gene pairs, maps the predicted gene pairs onto the associated pathway diagrams in KEGG PATHWAY, and indicates candidate genes for missing enzymes in organism-specific metabolic pathways. GENIES (http://www.genome.jp/tools/genies/) is publicly available as one of the genome analysis tools in GenomeNet.

  10. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms

    PubMed Central

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
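
    The classification-plus-smoothing pipeline is straightforward to sketch: a random forest over windowed features, then a moving-average filter over the predicted class probabilities. The 49 random features and five numeric modes below are placeholders for the paper's GPS/accelerometer features and activity labels.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 49))              # 49 features per 1-min window
        y = rng.integers(0, 5, 600)                 # bike/vehicle/walk/sit/stand
        rf = RandomForestClassifier(n_estimators=100).fit(X[:500], y[:500])

        proba = rf.predict_proba(X[500:])           # per-window class probabilities
        kernel = np.ones(5) / 5                     # 5-window moving average
        smoothed = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, proba)
        pred = smoothed.argmax(axis=1)              # smoothed mode predictions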

  11. Results-Oriented Supervision (for Teachers).

    ERIC Educational Resources Information Center

    Katz, Stanley S.

    Results-Oriented Supervision (R.O.S.) is a system for teacher supervision that focuses more on instructional improvement than on teacher evaluation. The supervisor and the teacher join together to formulate teaching objectives. After a period of implementation, a postconference is held to assess and restate or renew objectives. In the school…

  12. 12 CFR 240.14 - Supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... institution. A banking institution engaging in retail forex transactions shall diligently supervise the... similar function) of all retail forex accounts carried, operated, or advised by the banking institution... performing a similar function) relating to its retail forex business. (b) Supervision by officers,...

  13. 12 CFR 349.14 - Supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... institution. An FDIC-supervised insured depository institution engaging in retail forex transactions shall... status or performing a similar function) of all retail forex accounts carried, operated, or advised by at... forex business. (b) Supervision by officers, employees, or agents. An officer, employee, or agent of...

  14. 12 CFR 349.14 - Supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... institution. An FDIC-supervised insured depository institution engaging in retail forex transactions shall... status or performing a similar function) of all retail forex accounts carried, operated, or advised by at... forex business. (b) Supervision by officers, employees, or agents. An officer, employee, or agent of...

  15. 12 CFR 48.14 - Supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....14 Supervision. (a) Supervision by the national bank. A national bank engaging in retail forex... occupying a similar status or performing a similar function) of all retail forex accounts carried, operated... persons occupying a similar status or performing a similar function) relating to its retail forex...

  16. 12 CFR 48.14 - Supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....14 Supervision. (a) Supervision by the national bank. A national bank engaging in retail forex... occupying a similar status or performing a similar function) of all retail forex accounts carried, operated... persons occupying a similar status or performing a similar function) relating to its retail forex...

  17. 12 CFR 349.14 - Supervision.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... institution. An FDIC-supervised insured depository institution engaging in retail forex transactions shall... status or performing a similar function) of all retail forex accounts carried, operated, or advised by at... forex business. (b) Supervision by officers, employees, or agents. An officer, employee, or agent of...

  18. 12 CFR 48.14 - Supervision.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....14 Supervision. (a) Supervision by the national bank. A national bank engaging in retail forex... occupying a similar status or performing a similar function) of all retail forex accounts carried, operated... persons occupying a similar status or performing a similar function) relating to its retail forex...

  19. 19 CFR 19.34 - Customs supervision.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Wheat § 19.34 Customs supervision. Port directors shall exercise such supervision and control over the... imported wheat and no unauthorized mixing, blending, or commingling of such imported wheat. Importers... wheat in continuous Customs custody shall maintain such records as will enable Customs officers...

  20. 19 CFR 19.34 - Customs supervision.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Wheat § 19.34 Customs supervision. Port directors shall exercise such supervision and control over the... imported wheat and no unauthorized mixing, blending, or commingling of such imported wheat. Importers... wheat in continuous Customs custody shall maintain such records as will enable Customs officers...

  1. 19 CFR 19.34 - Customs supervision.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Wheat § 19.34 Customs supervision. Port directors shall exercise such supervision and control over the... imported wheat and no unauthorized mixing, blending, or commingling of such imported wheat. Importers... wheat in continuous Customs custody shall maintain such records as will enable Customs officers...

  2. 19 CFR 19.34 - Customs supervision.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Wheat § 19.34 Customs supervision. Port directors shall exercise such supervision and control over the... imported wheat and no unauthorized mixing, blending, or commingling of such imported wheat. Importers... wheat in continuous Customs custody shall maintain such records as will enable Customs officers...

  3. 21 CFR 640.62 - Medical supervision.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Medical supervision. 640.62 Section 640.62 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.62 Medical supervision....

  4. 21 CFR 640.62 - Medical supervision.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Medical supervision. 640.62 Section 640.62 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.62 Medical supervision....

  5. 21 CFR 640.62 - Medical supervision.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Medical supervision. 640.62 Section 640.62 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.62 Medical supervision....

  6. 21 CFR 640.62 - Medical supervision.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Medical supervision. 640.62 Section 640.62 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.62 Medical supervision....

  7. 21 CFR 640.62 - Medical supervision.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Medical supervision. 640.62 Section 640.62 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.62 Medical supervision....

  8. The Elements: A Model of Mindful Supervision

    ERIC Educational Resources Information Center

    Sturm, Deborah C.; Presbury, Jack; Echterling, Lennis G.

    2012-01-01

    Mindfulness, based on an ancient spiritual practice, is a core quality and way of being that can deepen and enrich the supervision of counselors. This model of mindful supervision incorporates Buddhist and Hindu conceptualizations of the roles of the five elements--space, earth, water, fire, air--as they relate to adhikara or studentship, the…

  9. Applying Services Marketing Principles to Postgraduate Supervision

    ERIC Educational Resources Information Center

    Dann, Stephen

    2008-01-01

    Purpose: The paper aims to describe the application of two key service quality frameworks for improving the delivery of postgraduate research supervision. The services quality frameworks are used to identify key areas of overlap between services marketing practice and postgraduate supervision that can be used by the supervisor to improve research…

  10. Designing High Performance Schools through Instructional Supervision.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    This paper summarizes a new paradigm of instructional supervision, which shifts the focus from individual behavior to the improvement of work processes and social system components of the school district. The proposed paradigm, the Knowledge Work Supervision model, is derived from sociotechnical systems design theory and linked to the premise that…

  11. Experiencing Higher Degree Research Supervision as Teaching

    ERIC Educational Resources Information Center

    Bruce, Christine; Stoodley, Ian

    2013-01-01

    This article describes higher degree research supervisors' experiences of supervision as teaching. While research education is considered central to the higher degree research experience, comparatively little is known to date of the teaching lenses adopted by supervisors as they go about their supervision. We worked with 35 supervisors engaged in…

  12. Supervision as a Contested Space: A Response

    ERIC Educational Resources Information Center

    Manathunga, Catherine

    2009-01-01

    Exploring postgraduate supervision practices with supervisors is a complex and contested endeavour. The growing body of literature on approaches to working with supervisors attests to this. Unlike some areas of higher education research, studies of supervision span theoretical spectrums from liberal approaches (e.g. Ballard and Clanchy 1991; Bruce…

  13. Supervising Student Teachers Using Peer Coaching.

    ERIC Educational Resources Information Center

    Benedetti, Teresa A.; Reed, Michelle K.

    Traditional supervision of teachers in American schools is often mislabeled. In practice, it is more an exercise in administrative monitoring and evaluation instead of a method to help teachers grow and improve professionally. Clinical supervision, developed as an alternative to traditional supervisory methods, focuses on the non-evaluative use of…

  14. The School Counselor, the Cactus, and Supervision

    ERIC Educational Resources Information Center

    Boyd, John D.; Walter, Paul B.

    1975-01-01

    The authors suggest that counselor supervision is a viable way to assist school counselors in dealing with lack of professional development opportunities. Supervision can facilitate the counselor's personal and professional development and can promote counselor competencies, accountability and the improvement of guidance services and programs. (SE)

  15. Wellness Model of Supervision: A Comparative Analysis

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  16. Factors for Consideration in Supervision and Evaluation.

    ERIC Educational Resources Information Center

    Nottingham, Marv; Dawson, Jack

    In most schools, teacher evaluation and supervision are seldom mutually exclusive. This paper synthesizes research findings and capitalizes on experience concerning these sensitive processes. After discussing three basic purposes for supervision-evaluation (staff development, school improvement, and personnel decisions), the paper carefully…

  17. 19 CFR 146.3 - Customs supervision.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Customs supervision. 146.3 Section 146.3 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) FOREIGN TRADE ZONES General Provisions § 146.3 Customs supervision. (a) Assignment of...

  18. Doctoral Student Supervision in a Managerial Climate

    ERIC Educational Resources Information Center

    Cribb, Alan; Gewirtz, Sharon

    2006-01-01

    This paper is organized around a single interview with an academic colleague who was asked to reflect on his changing experience of Ph.D. supervision. Material from the interview is used to raise some questions about the nature of responsible supervision in the humanities and social sciences, and the ways in which the possibilities for responsible…

  19. 10 CFR 35.27 - Supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Supervision. 35.27 Section 35.27 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL General Administrative Requirements § 35.27 Supervision. (a) A licensee that permits the receipt, possession, use, or transfer of byproduct material by an...

  20. 10 CFR 35.27 - Supervision.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Supervision. 35.27 Section 35.27 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL General Administrative Requirements § 35.27 Supervision. (a) A licensee that permits the receipt, possession, use, or transfer of byproduct material by an...

  1. 10 CFR 35.27 - Supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Supervision. 35.27 Section 35.27 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL General Administrative Requirements § 35.27 Supervision. (a) A licensee that permits the receipt, possession, use, or transfer of byproduct material by an...

  2. 10 CFR 35.27 - Supervision.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Supervision. 35.27 Section 35.27 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL General Administrative Requirements § 35.27 Supervision. (a) A licensee that permits the receipt, possession, use, or transfer of byproduct material by an...

  3. A Gestalt Approach to Group Supervision

    ERIC Educational Resources Information Center

    Melnick, Joseph; Fall, Marijane

    2008-01-01

    The authors define and then describe the practice of group supervision. The role of creative experiment in assisting supervisees who perceive themselves as confused, moving in circles, or immobilized is described. Fictional case examples illustrate these issues in supervision. The authors posit the "good fit" of Gestalt theory and techniques with…

  4. Educational Supervision: Perspectives, Issues, and Controversies.

    ERIC Educational Resources Information Center

    Glanz, Jeffrey, Ed.; Neville, Richard F., Ed.

    Educational supervision has historically sought to improve the quality of teaching. This book is a text for undergraduate and graduate students who are engaged in the study of issues in educational supervision; it is a compendium of informed commentaries on current issues written by prominent scholars in the field. The first part (12 chapters)…

  5. Postgraduate Supervision and Academic Support: Students' Perceptions.

    ERIC Educational Resources Information Center

    Lessing, A. C.; Schulze, S.

    2002-01-01

    Surveyed graduate distance education students at the University of South Africa about faculty supervision, including individual styles of supervision, the most rewarding and frustrating aspects of their studies, and their recommendations. Found that students' expectations are not entirely met. Offers recommendations. (EV)

  6. The Management and Administration of Instructional Supervision.

    ERIC Educational Resources Information Center

    Ortiz, Flora Ida

    Based on the view that instructional supervision means the improvement of both teachers and instruction, this paper attempts to show how instructional supervision is managed and administered by one principal. Data were gathered through observation and interviews conducted for a related study by the author on teacher rewards. The bulk of the paper…

  7. School Counselor Perceptions of Administrative Supervision Practices

    ERIC Educational Resources Information Center

    Eddings, Geoffrey Creighton

    2012-01-01

    This study examined the perceptions of school counselors regarding administrative supervision practices in K-12 public schools in South Carolina. Specifically, the goal was to gain insight into how school counselors view current building-level supervision practices in relation to Pajak's Twelve Dimensions of Supervisory Practice, as well as how…

  8. The Agile Approach with Doctoral Dissertation Supervision

    ERIC Educational Resources Information Center

    Tengberg, Lars Göran Wallgren

    2015-01-01

    Several research findings conclude that many doctoral students fail to complete their studies within the allowable time frame, in part because of problems related to the research and supervision process. Surveys show that most doctoral students are generally satisfied with their dissertation supervision. However, these surveys also reveal some…

  9. A Working System of School Counselor Supervision

    ERIC Educational Resources Information Center

    Somody, Catherine; Henderson, Patricia; Cook, Katrina; Zambrano, Elias

    2008-01-01

    School counselors acknowledge the need for supervision but rarely receive it. This article describes the counselor performance improvement system in one school district. Supervision is embedded in a process that assesses counselors' levels of professionalism on a matrix of competence and commitment. Administrative and clinical supervisors…

  10. Evaluation of a generalizable approach to clinical information retrieval using the automated retrieval console (ARC)

    PubMed Central

    Nguyen, Thien M; Farwell, Wildon R; Chen, Yongming; Fitzmeyer, Felicia; Harris, Owen M; Fiore, Louis D

    2010-01-01

    Reducing custom software development effort is an important goal in information retrieval (IR). This study evaluated a generalizable approach requiring no custom software or rule development. The study used documents “consistent with cancer” to evaluate system performance in the domains of colorectal (CRC), prostate (PC), and lung (LC) cancer. Using an end-user-supplied reference set, the automated retrieval console (ARC) iteratively calculated the performance of combinations of natural language processing-derived features and supervised classification algorithms. Training and testing involved 10-fold cross-validation for three sets of 500 documents each. Performance metrics included recall, precision, and F-measure. Annotation time for five physicians was also measured. Top-performing algorithms had recall, precision, and F-measure values as follows: for CRC, 0.90, 0.92, and 0.89, respectively; for PC, 0.97, 0.95, and 0.94; and for LC, 0.76, 0.80, and 0.75. In all but one case, conditional random fields outperformed maximum entropy-based classifiers. The algorithms performed well without custom code or rule development, but performance varied by specific application. PMID:20595303
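
    The evaluation loop ARC automates can be approximated with a generic text pipeline: bag-of-words features, a supervised classifier, and 10-fold cross-validated recall, precision, and F-measure. The snippets and labels below stand in for the NLP-derived features and the end-user-supplied reference set.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_validate

        docs = ["adenocarcinoma of the colon", "benign polyp, no malignancy"] * 50
        labels = [1, 0] * 50                        # 1 = consistent with cancer

        X = TfidfVectorizer().fit_transform(docs)
        scores = cross_validate(LogisticRegression(), X, labels, cv=10,
                                scoring=("recall", "precision", "f1"))
        print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})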

  11. Phenotype classification of zebrafish embryos by supervised learning.

    PubMed

    Jeanray, Nathalie; Marée, Raphaël; Pruvot, Benoist; Stern, Olivier; Geurts, Pierre; Wehenkel, Louis; Muller, Marc

    2015-01-01

    Zebrafish is increasingly used to assess the biological properties of chemical substances and is thus becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wild-type zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with the consensus voting of biological experts for nine of the eleven defects considered in 3-day-old zebrafish larvae. Automating the analysis and classification of zebrafish embryo pictures reduces the workload and time required of the biological expert and increases the reproducibility and objectivity of this classification.

  12. Comparative study of public-domain supervised machine-learning accuracy on the UCI database

    NASA Astrophysics Data System (ADS)

    Eklund, Peter W.

    1999-02-01

    This paper surveys public domain supervised learning algorithms and performs accuracy (error rate) analysis of their classification performance on unseen instances for twenty-nine of the University of California at Irvine machine learning datasets. The learning algorithms represent three types of classifiers: decision trees, neural networks and rule-based classifiers. The study performs data analysis and examines the effect of irrelevant attributes to explain the performance characteristics of the learning algorithms. The survey concludes with some general recommendations about the selection of public domain machine-learning algorithms relative to the properties of the data examined.

  13. Sparse Markov chain-based semi-supervised multi-instance multi-label method for protein function prediction.

    PubMed

    Han, Chao; Chen, Jian; Wu, Qingyao; Mu, Shuai; Min, Huaqing

    2015-10-01

    Automated assignment of protein function has received considerable attention in recent years for genome-wide studies. With the rapid accumulation of genome sequencing data produced by high-throughput experimental techniques, the process of manually predicting the functional properties of proteins has become increasingly cumbersome. Such large genomics data sets can only be annotated computationally. However, automated assignment of functions to unknown proteins is challenging due to its inherent difficulty and complexity. Previous studies have revealed that solving problems involving complicated objects with multiple semantic meanings using the multi-instance multi-label (MIML) framework is effective. In protein function prediction, each protein object in nature may be associated with distinct structural units (instances) and multiple functional properties (class labels), where each unit is described by an instance and each functional property is considered a class label. Thus, it is convenient and natural to tackle the protein function prediction problem using the MIML framework. In this paper, we propose a sparse Markov chain-based semi-supervised MIML method, called Sparse-Markov. A sparse transductive probability graph is constructed to encode the affinity information of the data based on an ensemble of Hausdorff distance metrics. Our goal is to exploit the affinity between protein objects in the sparse transductive probability graph to seek a sparse steady-state probability of the Markov chain model for protein function prediction, such that two proteins are given similar functional labels if they are close to each other in terms of the ensemble Hausdorff distance in the graph. Experimental results on seven real-world organism data sets covering three biological domains show that our proposed Sparse-Markov method achieves better performance than four state-of-the-art MIML learning algorithms.

  14. Automated labeling in document images

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2000-12-01

    The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named the Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show that the accuracy of this rule-based algorithm exceeds 96%.

  15. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2017-02-01

    One of the main challenges in climate change studies is the accurate projection of global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improving climate change projections in statistical downscaling approaches. For this purpose, the present paper applies a new supervised dimensionality reduction approach, called "Supervised Principal Component Analysis" (Supervised PCA), to regression-based statistical downscaling. This method is a generalization of PCA that extracts a sequence of principal components of the atmospheric variables which have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods, in comparison with some state-of-the-art dimensionality reduction algorithms, is evaluated in the statistical downscaling of precipitation at a specific site using two nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate that the Supervised PCA methods provide a significant improvement in accuracy.
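
    One published formulation of Supervised PCA (Barshan et al.) takes the top eigenvectors of X^T H K H X, where H is the centering matrix and K a kernel over the response; the sketch below uses a linear kernel on the target, so it is a hedged illustration of that formulation rather than the authors' downscaling setup. Data and dimensions are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30))              # atmospheric predictors
        y = X[:, 0] + 0.1 * rng.normal(size=200)    # hydro-climate response

        n = len(y)
        H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
        K = np.outer(y, y)                          # linear kernel on the response
        Q = X.T @ H @ K @ H @ X
        _, eigvecs = np.linalg.eigh(Q)              # eigenvalues in ascending order
        U = eigvecs[:, ::-1][:, :3]                 # top 3 supervised directions
        Z = X @ U                                   # reduced predictors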

  16. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  17. Space power subsystem automation technology

    NASA Technical Reports Server (NTRS)

    Graves, J. R. (Compiler)

    1982-01-01

    The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.

  18. Automated computation of arbor densities: a step toward identifying neuronal cell types

    PubMed Central

    Sümbül, Uygar; Zlateski, Aleksandar; Vishwanathan, Ashwin; Masland, Richard H.; Seung, H. Sebastian

    2014-01-01

    The shape and position of a neuron convey information regarding its molecular and functional identity. The identification of cell types from structure, a classic method, relies on the time-consuming step of arbor tracing. However, as genetic tools and imaging methods make data-driven approaches to neuronal circuit analysis feasible, the need for automated processing increases. Here, we first establish that mouse retinal ganglion cell types can be as precise about distributing their arbor volumes across the inner plexiform layer as they are about distributing the skeletons of the arbors. Then, we describe an automated approach to computing the spatial distribution of the dendritic arbors, or arbor density, with respect to a global depth coordinate based on this observation. Our method involves three-dimensional reconstruction of neuronal arbors by a supervised machine learning algorithm, post-processing of the enhanced stacks to remove somata and isolate the neuron of interest, and registration of neurons to each other using automatically detected arbors of the starburst amacrine interneurons as fiducial markers. In principle, this method could be generalizable to other structures of the CNS, provided that they allow sparse labeling of the cells and contain a reliable axis of spatial reference. PMID:25505389

  19. Intelligent Case Based Decision Support System for Online Diagnosis of Automated Production System

    NASA Astrophysics Data System (ADS)

    Ben Rabah, N.; Saddem, R.; Ben Hmida, F.; Carre-Menetrier, V.; Tagina, M.

    2017-01-01

    Diagnosis of an Automated Production System (APS) is a decision-making process designed to detect, locate and identify a particular failure caused by the control law. The literature distinguishes three major types of reasoning for industrial diagnosis: model-based, rule-based and case-based. A major limitation common to the first two is that they have no automated learning ability. This paper presents an interactive and effective Case Based Decision Support System for online Diagnosis (CB-DSSD) of an APS. It offers a synergy between Case Based Reasoning (CBR) and a Decision Support System (DSS) in order to support and assist the Human Operator of Supervision (HOS) in his/her decision process. An experimental evaluation performed on an Interactive Training System for PLC (ITS PLC), which allows control of a Programmable Logic Controller (PLC), simulation of sensor and/or actuator failures, and validation of the control algorithm through a real-time interactive experience, showed the efficiency of our approach.
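
    The retrieval step at the heart of any CBR diagnosis engine can be sketched in a few lines. This is a generic nearest-neighbour retrieval over binary symptom vectors, not the CB-DSSD implementation; the similarity measure and case representation are assumptions.

    ```python
    import numpy as np

    def retrieve_cases(case_base, symptoms, k=1):
        """Return the k past cases most similar to the observed symptoms.

        case_base : list of (symptom_vector, diagnosis) pairs.
        symptoms  : binary vector of alarms observed by the operator.
        """
        def similarity(a, b):
            a, b = np.asarray(a), np.asarray(b)
            return np.mean(a == b)               # simple matching coefficient
        ranked = sorted(case_base,
                        key=lambda case: similarity(case[0], symptoms),
                        reverse=True)
        return ranked[:k]                         # candidate diagnoses
    ```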

  20. "First generation" automated DNA sequencing technology.

    PubMed

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines.

  1. Semi-Supervised Video Segmentation Using Tree Structured Graphical Models.

    PubMed

    Badrinarayanan, Vijay; Budvytis, Ignas; Cipolla, Roberto

    2013-03-06

    We present a novel patch-based probabilistic graphical model for semi-supervised video segmentation. At the heart of our model is a temporal tree structure which links patches in adjacent frames through the video sequence. This permits exact inference of pixel labels without resorting to traditional short time-window based video processing or instantaneous decision making. The input to our algorithm is labelled key frame(s) of a video sequence and the output is pixel-wise labels along with their confidences. We propose an efficient inference scheme that performs exact inference over the temporal tree, and optionally a per frame label smoothing step using loopy BP, to estimate pixel-wise labels and their posteriors. These posteriors are used to learn pixel unaries by training a Random Decision Forest in a semi-supervised manner. These unaries are used in a second iteration of label inference to improve the segmentation quality. We demonstrate the efficacy of our proposed algorithm using several qualitative and quantitative tests on both foreground/background and multi-class video segmentation problems using publicly available and our own datasets.

  2. Semi-supervised video segmentation using tree structured graphical models.

    PubMed

    Badrinarayanan, Vijay; Budvytis, Ignas; Cipolla, Roberto

    2013-11-01

    We present a novel patch-based probabilistic graphical model for semi-supervised video segmentation. At the heart of our model is a temporal tree structure that links patches in adjacent frames through the video sequence. This permits exact inference of pixel labels without resorting to traditional short time window-based video processing or instantaneous decision making. The input to our algorithm is labeled key frame(s) of a video sequence and the output is pixel-wise labels along with their confidences. We propose an efficient inference scheme that performs exact inference over the temporal tree, and optionally a per frame label smoothing step using loopy BP, to estimate pixel-wise labels and their posteriors. These posteriors are used to learn pixel unaries by training a Random Decision Forest in a semi-supervised manner. These unaries are used in a second iteration of label inference to improve the segmentation quality. We demonstrate the efficacy of our proposed algorithm using several qualitative and quantitative tests on both foreground/background and multiclass video segmentation problems using publicly available and our own datasets.
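
    The two records above describe exact inference over a temporal tree of patches. As a simplified stand-in (a chain is the simplest tree), the sketch below performs exact MAP label inference by dynamic programming; the cost arrays are hypothetical and this is not the authors' patch model.

    ```python
    import numpy as np

    def chain_map_labels(unary, pairwise):
        """Exact MAP inference on a temporal chain (Viterbi recursion).

        unary    : (T, K) per-frame, per-label costs (e.g., -log likelihood).
        pairwise : (K, K) cost of switching labels between adjacent frames.
        """
        T, K = unary.shape
        cost = unary[0].copy()
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            total = cost[:, None] + pairwise      # all (prev, cur) pairs
            back[t] = total.argmin(axis=0)
            cost = total.min(axis=0) + unary[t]
        labels = np.empty(T, dtype=int)
        labels[-1] = int(cost.argmin())
        for t in range(T - 1, 0, -1):             # trace the best path back
            labels[t - 1] = back[t, labels[t]]
        return labels
    ```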

  3. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2016-06-08

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity for multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and substantially outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.

  4. Providing effective supervision in clinical neuropsychology.

    PubMed

    Stucky, Kirk J; Bush, Shane; Donders, Jacobus

    2010-01-01

    A specialty like clinical neuropsychology is shaped by its selection of trainees, educational standards, expected competencies, and the structure of its training programs. The development of individual competency in this specialty is dependent to a considerable degree on the provision of competent supervision to its trainees. In clinical neuropsychology, as in other areas of professional health-service psychology, supervision is the most frequently used method for teaching a variety of skills, including assessment, report writing, differential diagnosis, and treatment. Although much has been written about the provision of quality supervision in clinical and counseling psychology, very little published guidance is available regarding the teaching and provision of supervision in clinical neuropsychology. The primary focus of this article is to provide a framework and guidance for the development of suggested competency standards for training of neuropsychological supervisors, particularly at the residency level. In this paper we outline important components of supervision for neuropsychology trainees and suggest ways in which clinicians can prepare for supervisory roles. Similar to Falender and Shafranske (2004), we propose a competency-based approach to supervision that advocates for a science-informed, formalized, and objective process that clearly delineates the competencies required for good supervisory practice. As much as possible, supervisory competencies are related to foundational and functional competencies in professional psychology, as well as recent legislative initiatives mandating training in supervision. It is our hope that this article will foster further discussion regarding this complex topic, and eventually enhance training in clinical neuropsychology.

  5. Label Ranking Algorithms: A Survey

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar; Gärtner, Thomas

    Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification, and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature due, in part, to this versatile nature of the problem. In this paper, we survey these algorithms.

  6. Dynamics of photospheric bright points in G-band derived from two fully automated algorithms. (Slovak Title: Dynamika fotosférických jasných bodov v G-páse odvodená použitím dvoch plne automatických algoritmov)

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Rybák, J.; Hanslmeier, A.; Utz, D.

    2010-12-01

    Concentrations of small-scale magnetic field in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Studying the dynamics of the bright points in the G-band (BPGBs) can also help in addressing many issues related to the problem of solar coronal heating. In this work, we have used a set of 142 speckle-reconstructed images in the G-band taken by the Dutch Open Telescope (DOT) on 19 October 2005 to compare two fully automated algorithms for identifying BPGBs: an algorithm developed by Utz et al. (2009, 2010), and an algorithm developed following the work of Berger et al. (1995, 1998). We then tracked the motion of the BPGBs identified by both algorithms in time and space and constructed the distributions of their lifetimes, sizes and speeds. The results show that both algorithms give very similar results for the BPGB lifetimes and speeds, but their results vary significantly for the sizes of the identified BPGBs. This difference is due to the fact that in the case of the Berger et al. identification algorithm no additional criteria were applied to constrain the allowed BPGB sizes. As a result, in further studies of BPGB dynamics we will prefer the Utz algorithm to identify and track BPGBs.

  7. Automated Cryocooler Monitor and Control System Software

    NASA Technical Reports Server (NTRS)

    Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad

    2011-01-01

    This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
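
    To illustrate the two software functions described (converting non-linear sensor voltages via a calibration curve, and rule-based control keyed to the operating mode), here is a minimal sketch. The calibration points and rule thresholds are placeholders, not values from the NPO-47246 system.

    ```python
    import numpy as np

    # Hypothetical diode-thermometer calibration: voltage rises as the
    # sensor cools; real curves come from the sensor vendor.
    DIODE_V = np.array([0.50, 0.60, 0.80, 1.00, 1.10, 1.60])   # volts
    DIODE_T = np.array([300., 250., 150.,  77.,  40.,   4.])   # kelvin

    def diode_voltage_to_kelvin(v):
        """Piecewise-linear interpolation of the non-linear diode curve."""
        return np.interp(v, DIODE_V, DIODE_T)

    def control_step(temp_k, vacuum_torr, mode="cooldown"):
        """Tiny rule-based controller in the spirit described above."""
        actions = []
        if mode == "cooldown" and vacuum_torr > 1e-4:
            actions.append("open vacuum solenoid; run vacuum pump")
        if mode == "warmup" and temp_k < 290.0:
            actions.append("enable warm-up heaters")
        return actions
    ```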

  8. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection and the subtlety of early breast cancer signs on mammograms suggest the need for an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications, which are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.

  9. Automated office blood pressure.

    PubMed

    Myers, Martin G; Godwin, Marshall

    2012-05-01

    Manual blood pressure (BP) is gradually disappearing from clinical practice with the mercury sphygmomanometer now considered to be an environmental hazard. Manual BP is also subject to measurement error on the part of the physician/nurse and patient-related anxiety which can result in poor quality BP measurements and office-induced (white coat) hypertension. Automated office (AO) BP with devices such as the BpTRU (BpTRU Medical Devices, Coquitlam, BC) has already replaced conventional manual BP in many primary care practices in Canada and has also attracted interest in other countries where research studies using AOBP have been undertaken. The basic principles of AOBP include multiple readings taken with a fully automated recorder with the patient resting alone in a quiet room. When these principles are followed, office-induced hypertension is eliminated and AOBP exhibits a much stronger correlation with the awake ambulatory BP as compared with routine manual BP measurements. Unlike routine manual BP, AOBP correlates as well with left ventricular mass as does the awake ambulatory BP. AOBP also simplifies the definition of hypertension in that the cut point for a normal AOBP (< 135/85 mm Hg) is the same as for the awake ambulatory BP and home BP. This article summarizes the currently available evidence supporting the use of AOBP in routine clinical practice and proposes an algorithm in which AOBP replaces manual BP for the diagnosis and management of hypertension.
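
    The AOBP definition above is concrete enough to state as code. The sketch below averages a session's automated readings and applies the article's 135/85 mm Hg cut point; it is a reading aid, not clinical software.

    ```python
    def classify_aobp(readings):
        """Summarize one automated office BP session.

        readings : list of (systolic, diastolic) pairs recorded with the
        patient resting alone in a quiet room.
        """
        n = len(readings)
        mean_sys = sum(s for s, _ in readings) / n
        mean_dia = sum(d for _, d in readings) / n
        elevated = mean_sys >= 135 or mean_dia >= 85   # AOBP cut point
        return mean_sys, mean_dia, elevated

    # Example: five automated readings from one visit.
    print(classify_aobp([(138, 84), (134, 82), (131, 80), (133, 83), (130, 79)]))
    ```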

  10. Agile automated vision

    NASA Astrophysics Data System (ADS)

    Fandrich, Juergen; Schmitt, Lorenz A.

    1994-11-01

    The microelectronic industry is a protagonist in driving automated vision to new paradigms. Today semiconductor manufacturers use vision systems quite frequently in their fabs in the front-end process. In fact, the process depends on reliable image processing systems. In the back-end process, where ICs are assembled and packaged, vision systems are today only partly used. But in the coming years automated vision will become compulsory for the back-end process as well. Vision will be fully integrated into every IC package production machine to increase yields and reduce costs. Modern high-speed material processing requires dedicated and efficient concepts in image processing. At the same time, the integration of various equipment in a production plant demands unified handling of data flow and interfaces. Only agile vision systems can satisfy these conflicting demands: fast, reliable, adaptable, scalable and comprehensive. A powerful hardware platform is an indispensable requirement for the use of advanced and reliable, but unfortunately computationally intensive, image processing algorithms. The massively parallel SIMD hardware product LANTERN/VME supplies a powerful platform for existing and new functionality. LANTERN/VME is used with a new optical sensor for IC package lead inspection. This is done in 3D, including horizontal and coplanarity inspection. The appropriate software is designed for lead inspection, alignment and control tasks in IC package production and handling equipment, like Trim&Form, Tape&Reel and Pick&Place machines.

  11. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  12. Automated Standard Hazard Tool

    NASA Technical Reports Server (NTRS)

    Stebler, Shane

    2014-01-01

    The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to integrate easily into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and, possibly, different applications. The project's success is reflected in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.

  13. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification

    NASA Astrophysics Data System (ADS)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map data samples residing in a high-dimensional ambient space to a lower-dimensional domain in a structure-preserving way, while enhancing the separation between different classes in the learned embedding. Most nonlinear supervised manifold learning methods compute the embedding of the manifolds only at the initially available training points, while the generalization of the embedding to novel points, known as the out-of-sample extension problem in manifold learning, becomes especially important in classification applications. In this work, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function (RBF) interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with a progressive procedure. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
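
    The interpolation machinery can be sketched compactly. Below, a Gaussian-RBF map is fit from training points to their learned embeddings, with a ridge term standing in for the paper's direction-dependent smoothness regularizer; kernel width and regularization strength are illustrative assumptions.

    ```python
    import numpy as np

    def fit_rbf_extension(X_train, Z_train, sigma=1.0, lam=1e-6):
        """Return a function mapping new ambient points to the embedding.

        X_train : (n, p) training samples; Z_train : (n, d) embeddings.
        """
        sq = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))
        W = np.linalg.solve(K + lam * np.eye(len(K)), Z_train)

        def embed(X_new):
            d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2)) @ W
        return embed
    ```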

  14. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective

    PubMed Central

    Tsai, Sang-Bing; Chen, Kuan-Yu; Zhao, Hongrui; Wei, Yu-Min; Wang, Cheng-Kuang; Zheng, Yuxiang; Chang, Li-Chung; Wang, Jiangtao

    2016-01-01

    Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision making trial and evaluation laboratory (DEMATEL) and proposes a fuzzy-DEMATEL model. First, fuzzy theory was applied to examine bank supervision criteria and analyze fuzzy semantics. Second, the fuzzy-DEMATEL model was used to calculate the degree to which financial supervision criteria mutually influenced one another and their causal relationship. Finally, an evaluation criteria model for evaluating bank and financial supervision was established. PMID:27992449

  15. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective.

    PubMed

    Tsai, Sang-Bing; Chen, Kuan-Yu; Zhao, Hongrui; Wei, Yu-Min; Wang, Cheng-Kuang; Zheng, Yuxiang; Chang, Li-Chung; Wang, Jiangtao

    2016-01-01

    Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision making trial and evaluation laboratory (DEMATEL) and proposes a fuzzy-DEMATEL model. First, fuzzy theory was applied to examine bank supervision criteria and analyze fuzzy semantics. Second, the fuzzy-DEMATEL model was used to calculate the degree to which financial supervision criteria mutually influenced one another and their causal relationship. Finally, an evaluation criteria model for evaluating bank and financial supervision was established.
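
    The DEMATEL computation referenced in both records follows a standard recipe: normalize the direct-influence matrix and solve for the total-relation matrix. A minimal sketch of generic DEMATEL, not the paper's fuzzy extension:

    ```python
    import numpy as np

    def dematel(D):
        """From a direct-influence matrix D to the total-relation matrix T.

        Returns T plus the prominence (r + c) and relation (r - c) indices
        used to separate cause criteria from effect criteria.
        """
        D = np.asarray(D, dtype=float)
        s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
        N = D / s                                   # normalized influence
        T = N @ np.linalg.inv(np.eye(len(D)) - N)   # T = N (I - N)^(-1)
        r, c = T.sum(axis=1), T.sum(axis=0)
        return T, r + c, r - c
    ```

    The fuzzy variant in the paper first defuzzifies linguistic ratings into the crisp matrix D; the linear algebra is then identical.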

  16. Toward automated detection of malignant melanoma

    NASA Astrophysics Data System (ADS)

    Huang, Billy; Gareau, Daniel S.

    2009-02-01

    In vivo reflectance confocal microscopy shows promise for the early detection of malignant melanoma (MM). Two hallmarks of MM have been identified: the presence of pagetoid melanocytes in the epidermis and the breakdown of the dermal papillae. For detection of MM, these features must be identified qualitatively by the clinician and quantitatively through automated pattern recognition. A machine vision algorithm was developed for automated detection. The algorithm detected pagetoid melanocytes and breakdown of the dermal/epidermal junction in a pre-selected set of five MMs and five benign nevi for correct diagnosis.

  17. Incremental multi-class semi-supervised clustering regularized by Kalman filtering.

    PubMed

    Mehrkanoon, Siamak; Agudelo, Oscar Mauricio; Suykens, Johan A K

    2015-11-01

    This paper introduces an on-line semi-supervised learning algorithm formulated as a regularized kernel spectral clustering (KSC) approach. We consider the case where new data arrive sequentially but only a small fraction of it is labeled. The available labeled data act as prototypes and help to improve the performance of the algorithm to estimate the labels of the unlabeled data points. We adopt a recently proposed multi-class semi-supervised KSC based algorithm (MSS-KSC) and make it applicable for on-line data clustering. Given a few user-labeled data points the initial model is learned and then the class membership of the remaining data points in the current and subsequent time instants are estimated and propagated in an on-line fashion. The update of the memberships is carried out mainly using the out-of-sample extension property of the model. Initially the algorithm is tested on computer-generated data sets, then we show that video segmentation can be cast as a semi-supervised learning problem. Furthermore we show how the tracking capabilities of the Kalman filter can be used to provide the labels of objects in motion and thus regularizing the solution obtained by the MSS-KSC algorithm. In the experiments, we demonstrate the performance of the proposed method on synthetic data sets and real-life videos where the clusters evolve in a smooth fashion over time.
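
    The Kalman-filter regularization relies on a standard predict/update cycle whose predicted positions supply labels for objects in motion. A one-dimensional constant-velocity sketch follows; the state layout and noise levels are assumptions.

    ```python
    import numpy as np

    def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
        """One predict/update cycle; x = [position, velocity]."""
        F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
        H = np.array([[1.0, 0.0]])                 # only position observed
        x = F @ x                                  # predict state
        P = F @ P @ F.T + q * np.eye(2)            # predict covariance
        innov = z - H @ x                          # measurement residual
        S = H @ P @ H.T + r                        # innovation covariance
        K = P @ H.T / S                            # Kalman gain, shape (2, 1)
        x = x + (K * innov).ravel()                # update state
        P = (np.eye(2) - K @ H) @ P                # update covariance
        return x, P
    ```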

  18. Maintaining professional resilience through group restorative supervision.

    PubMed

    Wallbank, Sonya

    2013-08-01

    Restorative clinical supervision has been delivered to over 2,500 professionals and has shown to be highly effective in reducing burnout, stress and increasing compassion satisfaction. Demand for the programme has shown that a sustainable model of implementation is needed for organisations who may not be able to invest in continued individual sessions. Following the initial six sessions, group restorative supervision has been developed and this paper reports on the programme's success in maintaining and continuing to improve compassion satisfaction, stress and burnout through the process of restorative group supervision. This means that organisations can continue to maintain the programme once the initial training has been completed and have confidence within the restorative group supervision to support professionals in managing the emotional demands of their role. The restorative groups have also had inadvertent positive benefits in workplace functioning. The paper outlines how professionals have been able to use this learning to support them in being more effective.

  19. 17 CFR 5.21 - Supervision.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... who has no supervisory duties, must diligently supervise the handling by its partners, officers... its partners, officers, employees and agents (or persons occupying a similar status or performing a similar function) relating to its business as a Commission registrant....

  20. 17 CFR 166.3 - Supervision.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... duties, must diligently supervise the handling by its partners, officers, employees and agents (or... carried, operated, advised or introduced by the registrant and all other activities of its partners...) relating to its business as a Commission registrant....

  1. “SmartMonitor” — An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation

    PubMed Central

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-01-01

    “SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adapted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons. PMID:24905854

  2. "SmartMonitor"--an intelligent security system for the protection of individuals and small properties with the possibility of home automation.

    PubMed

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-06-05

    "SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.

  3. Comparison Between Supervised and Unsupervised Classifications of Neuronal Cell Types: A Case Study

    PubMed Central

    Guerra, Luis; McGarry, Laura M; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2011-01-01

    In the study of neural circuits, it becomes essential to discern the different neuronal cell types that build the circuit. Traditionally, neuronal cell types have been classified using qualitative descriptors. More recently, several attempts have been made to classify neurons quantitatively, using unsupervised clustering methods. While useful, these algorithms do not take advantage of previous information known to the investigator, which could improve the classification task. For neocortical GABAergic interneurons, the problem to discern among different cell types is particularly difficult and better methods are needed to perform objective classifications. Here we explore the use of supervised classification algorithms to classify neurons based on their morphological features, using a database of 128 pyramidal cells and 199 interneurons from mouse neocortex. To evaluate the performance of different algorithms we used, as a “benchmark,” the test to automatically distinguish between pyramidal cells and interneurons, defining “ground truth” by the presence or absence of an apical dendrite. We compared hierarchical clustering with a battery of different supervised classification algorithms, finding that supervised classifications outperformed hierarchical clustering. In addition, the selection of subsets of distinguishing features enhanced the classification accuracy for both sets of algorithms. The analysis of selected variables indicates that dendritic features were most useful to distinguish pyramidal cells from interneurons when compared with somatic and axonal morphological variables. We conclude that supervised classification algorithms are better matched to the general problem of distinguishing neuronal cell types when some information on these cell groups, in our case being pyramidal or interneuron, is known a priori. As a spin-off of this methodological study, we provide several methods to automatically distinguish neocortical pyramidal cells from interneurons.

  4. Comparison between supervised and unsupervised classifications of neuronal cell types: a case study.

    PubMed

    Guerra, Luis; McGarry, Laura M; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2011-01-01

    In the study of neural circuits, it becomes essential to discern the different neuronal cell types that build the circuit. Traditionally, neuronal cell types have been classified using qualitative descriptors. More recently, several attempts have been made to classify neurons quantitatively, using unsupervised clustering methods. While useful, these algorithms do not take advantage of previous information known to the investigator, which could improve the classification task. For neocortical GABAergic interneurons, the problem to discern among different cell types is particularly difficult and better methods are needed to perform objective classifications. Here we explore the use of supervised classification algorithms to classify neurons based on their morphological features, using a database of 128 pyramidal cells and 199 interneurons from mouse neocortex. To evaluate the performance of different algorithms we used, as a "benchmark," the test to automatically distinguish between pyramidal cells and interneurons, defining "ground truth" by the presence or absence of an apical dendrite. We compared hierarchical clustering with a battery of different supervised classification algorithms, finding that supervised classifications outperformed hierarchical clustering. In addition, the selection of subsets of distinguishing features enhanced the classification accuracy for both sets of algorithms. The analysis of selected variables indicates that dendritic features were most useful to distinguish pyramidal cells from interneurons when compared with somatic and axonal morphological variables. We conclude that supervised classification algorithms are better matched to the general problem of distinguishing neuronal cell types when some information on these cell groups, in our case being pyramidal or interneuron, is known a priori. As a spin-off of this methodological study, we provide several methods to automatically distinguish neocortical pyramidal cells from interneurons.
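
    The study's "benchmark" comparison is easy to reproduce in outline: score a supervised classifier by cross-validation and compare it against the label agreement of an unsupervised two-way clustering. The sketch below uses placeholder data sized to match the 128 + 199 = 327 cells in the database; the actual study compared a whole battery of classifiers.

    ```python
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(327, 20))        # placeholder morphology features
    y = rng.integers(0, 2, size=327)      # 1 = pyramidal, 0 = interneuron

    # Supervised: cross-validated accuracy of one example classifier.
    acc = cross_val_score(RandomForestClassifier(), X, y, cv=5).mean()

    # Unsupervised: best label agreement of a 2-way hierarchical clustering.
    clusters = AgglomerativeClustering(n_clusters=2).fit_predict(X)
    agree = max((clusters == y).mean(), ((1 - clusters) == y).mean())
    print(f"supervised {acc:.2f} vs unsupervised {agree:.2f}")
    ```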

  5. Supervised, semi-supervised and unsupervised inference of gene regulatory networks.

    PubMed

    Maetschke, Stefan R; Madhamshettiwar, Piyush B; Davis, Melissa J; Ragan, Mark A

    2014-03-01

    Inference of gene regulatory network from expression data is a challenging task. Many methods have been developed to this purpose but a comprehensive evaluation that covers unsupervised, semi-supervised and supervised methods, and provides guidelines for their practical application, is lacking. We performed an extensive evaluation of inference methods on simulated and experimental expression data. The results reveal low prediction accuracies for unsupervised techniques with the notable exception of the Z-SCORE method on knockout data. In all other cases, the supervised approach achieved the highest accuracies and even in a semi-supervised setting with small numbers of only positive samples, outperformed the unsupervised techniques.
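
    The Z-SCORE method singled out above has a particularly simple form on knockout data; a sketch follows, with the array layout as an assumption.

    ```python
    import numpy as np

    def zscore_edges(wt_mean, wt_std, ko_expr):
        """Score putative regulatory edges from knockout experiments.

        ko_expr[i, j] = expression of gene j when gene i is knocked out;
        wt_mean, wt_std = wild-type mean and std per gene. A large |z|
        for (i, j) suggests gene i regulates gene j.
        """
        Z = np.abs((ko_expr - wt_mean) / wt_std)
        np.fill_diagonal(Z, 0.0)             # ignore self-edges
        return Z                             # rank edges by score
    ```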

  6. Implementation of group supervision in child welfare: findings from Arizona's Supervision Circle Project.

    PubMed

    Lietz, Cynthia A

    2008-01-01

    The process of supervision plays an important role in developing the skills necessary to respond effectively to reports of child maltreatment. Specifically, educational supervision prompting discussion and critical thinking can enhance the analytic skills needed to consider the complexity commonly found in child welfare practice. To this end, group supervision was implemented with supervisors in Arizona to enrich supervisory dialog to better prepare for the unique and often unexpected challenges of child welfare. Post-test data collected from participants suggest group supervision may be one way the field of child protection can enhance critical thinking.

  7. On psychoanalytic supervision as signature pedagogy.

    PubMed

    Watkins, C Edward

    2014-04-01

    What is signature pedagogy in psychoanalytic education? This paper examines that question, considering why psychoanalytic supervision best deserves that designation. In focusing on supervision as signature pedagogy, I accentuate its role in building psychoanalytic habits of mind, habits of hand, and habits of heart, and transforming theory and self-knowledge into practical product. Other facets of supervision as signature pedagogy addressed in this paper include its features of engagement, uncertainty, formation, and pervasiveness, as well as levels of surface, deep, and implicit structure. Epistemological, ontological, and axiological in nature, psychoanalytic supervision engages trainees in learning to do, think, and value what psychoanalytic practitioners in the field do, think, and value: It is, most fundamentally, professional preparation for competent, "good work." In this paper, effort is made to shine a light on and celebrate the pivotal role of supervision in "making" or developing budding psychoanalysts and psychoanalytic psychotherapists. Now over a century old, psychoanalytic supervision remains unparalleled in (1) connecting and integrating conceptualization and practice, (2) transforming psychoanalytic theory and self-knowledge into an informed analyzing instrument, and (3) teaching, transmitting, and perpetuating the traditions, practice, and culture of psychoanalytic treatment.

  8. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, and has provided improved passenger comfort, since their introduction in the late 1980s. However, the original automation benefits, including reduced flight crew workload, human errors or training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on the flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements, along with their interactions. Besides flight crew deficiencies, automation system

  9. Automated security response robot

    NASA Astrophysics Data System (ADS)

    Ciccimaro, Dominic A.; Everett, Hobart R.; Gilbreath, Gary A.; Tran, Tien T.

    1999-01-01

    ROBART III is intended as an advanced demonstration platform for non-lethal response measures, extending the concepts of reflexive teleoperation into the realm of coordinated weapons control in law enforcement and urban warfare scenarios. A rich mix of ultrasonic and optical proximity and range sensors facilitates remote operation in unstructured and unexplored buildings with minimal operator supervision. Autonomous navigation and mapping of interior spaces is significantly enhanced by an innovative algorithm which exploits the fact that the majority of man-made structures are characterized by parallel and orthogonal walls. Extremely robust intruder detection and assessment capabilities are achieved through intelligent fusion of a multitude of inputs from various onboard motion sensors. Intruder detection is addressed by a 360-degree staring array of passive-IR motion detectors, augmented by a number of positionable head-mounted sensors. Automatic camera tracking of a moving target is accomplished using a video line digitizer. Non-lethal response systems include a six-barrelled pneumatically-powered Gatling gun, high-powered strobe lights, and three ear-piercing 103-decibel sirens.

  10. 28 CFR 810.1 - Supervision contact requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Supervision contact requirements. 810.1... COLUMBIA COMMUNITY SUPERVISION: ADMINISTRATIVE SANCTIONS § 810.1 Supervision contact requirements. If you... District of Columbia (“CSOSA”), CSOSA will establish a supervision level for you and your minimum...

  11. 28 CFR 810.1 - Supervision contact requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Supervision contact requirements. 810.1... COLUMBIA COMMUNITY SUPERVISION: ADMINISTRATIVE SANCTIONS § 810.1 Supervision contact requirements. If you... District of Columbia (“CSOSA”), CSOSA will establish a supervision level for you and your minimum...

  12. 28 CFR 810.1 - Supervision contact requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Supervision contact requirements. 810.1... COLUMBIA COMMUNITY SUPERVISION: ADMINISTRATIVE SANCTIONS § 810.1 Supervision contact requirements. If you... District of Columbia (“CSOSA”), CSOSA will establish a supervision level for you and your minimum...

  13. 28 CFR 810.1 - Supervision contact requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Supervision contact requirements. 810.1... COLUMBIA COMMUNITY SUPERVISION: ADMINISTRATIVE SANCTIONS § 810.1 Supervision contact requirements. If you... District of Columbia (“CSOSA”), CSOSA will establish a supervision level for you and your minimum...

  14. 28 CFR 810.1 - Supervision contact requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Supervision contact requirements. 810.1... COLUMBIA COMMUNITY SUPERVISION: ADMINISTRATIVE SANCTIONS § 810.1 Supervision contact requirements. If you... District of Columbia (“CSOSA”), CSOSA will establish a supervision level for you and your minimum...

  15. Opportunities to Learn Scientific Thinking in Joint Doctoral Supervision

    ERIC Educational Resources Information Center

    Kobayashi, Sofie; Grout, Brian W.; Rump, Camilla Østerberg

    2015-01-01

    Research into doctoral supervision has increased rapidly over the last decades, yet our understanding of how doctoral students learn scientific thinking from supervision is limited. Most studies are based on interviews with little work being reported that is based on observation of actual supervision. While joint supervision has become widely…

  16. Automated electric power management and control for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Mellor, Pamela A.; Kish, James A.

    1990-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. It strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. An integrated approach to the power system command and control problem is defined and used to direct technology development in: diagnosis, security monitoring and analysis, battery management, and cooperative problem-solving for resource allocation. The prototype automated power system is developed using simulations and test-beds.

  17. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030
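
    Since the method filters images before extracting Local Binary Pattern features, a plain 8-neighbour LBP sketch may help fix ideas; this is the textbook operator, not the learned filter itself.

    ```python
    import numpy as np

    def lbp_image(img):
        """Basic 8-neighbour LBP codes for a 2-D grayscale array."""
        c = img[1:-1, 1:-1]                       # interior pixels
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.int32)
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy:img.shape[0] - 1 + dy,
                     1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.int32) << bit
        return code                               # values in [0, 255]
    ```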

  18. Air Force construction automation/robotics

    NASA Technical Reports Server (NTRS)

    Nease, AL; Dusseault, Christopher

    1994-01-01

    The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible and that more stringent construction repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and associated computing environment. The ultimate goal is the fielding of robotic repair capability operating at the level of supervised autonomy. The authors of this paper will discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.

  19. Towards automated biomedical ontology harmonization.

    PubMed

    Uribe, Gustavo A; Lopez, Diego M; Blobel, Bernd

    2014-01-01

    The use of biomedical ontologies is increasing, especially in the context of health systems interoperability. Ontologies are key pieces to understand the semantics of information exchanged. However, given the diversity of biomedical ontologies, it is essential to develop tools that support harmonization processes amongst them. Several algorithms and tools have been proposed by computer scientists to partially support ontology harmonization. However, these tools face several problems, especially in the biomedical domain where ontologies are large and complex. In the harmonization process, matching is a basic task. This paper explains the different ontology harmonization processes, analyzes existing matching tools, and proposes a prototype of an ontology harmonization service. The results demonstrate that there are many open issues in the field of biomedical ontology harmonization, such as: overcoming structural discrepancies between ontologies; the lack of semantic algorithms to automate the process; the low matching efficiency of existing algorithms; and the use of domain and top-level ontologies in the matching process.
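
    The basic matching task described here, finding candidate correspondences between the concept labels of two ontologies, can be illustrated with a purely lexical pass; real harmonization adds structural and semantic checks on top.

    ```python
    from difflib import SequenceMatcher

    def match_concepts(labels_a, labels_b, threshold=0.8):
        """Return label pairs whose string similarity clears a threshold."""
        pairs = []
        for a in labels_a:
            for b in labels_b:
                score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                if score >= threshold:
                    pairs.append((a, b, round(score, 3)))
        return sorted(pairs, key=lambda p: -p[2])

    print(match_concepts(["Myocardial infarction"], ["myocardial-infarction"]))
    ```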

  20. Automated Health Alerts Using In-Home Sensor Data for Embedded Health Assessment

    PubMed Central

    Guevara, Rainer Dane; Rantz, Marilyn

    2015-01-01

    We present an example of unobtrusive, continuous monitoring in the home for the purpose of assessing early health changes. Sensors embedded in the environment capture behavior and activity patterns. Changes in patterns are detected as potential signs of changing health. We first present results of a preliminary study investigating 22 features extracted from in-home sensor data. A 1-D alert algorithm was then implemented to generate health alerts to clinicians in a senior housing facility. Clinicians analyze each alert and provide a rating on the clinical relevance. These ratings are then used as ground truth for training and testing classifiers. Here, we present the methodology for four classification approaches that fuse multisensor data. Results are shown using embedded sensor data and health alert ratings collected on 21 seniors over nine months. The best results show similar performance for two techniques, where one approach uses only domain knowledge and the second uses supervised learning for training. Finally, we propose a health change detection model based on these results and clinical expertise. The system of in-home sensors and algorithms for automated health alerts provides a method for detecting health problems very early so that early treatment is possible. This method of passive in-home sensing alleviates compliance issues. PMID:27170900
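
    The "1-D alert algorithm" is described only by name; one plausible minimal form flags a sensor feature whose latest value drifts from its own trailing baseline. The window length and threshold below are assumptions.

    ```python
    import numpy as np

    def one_d_alert(feature_history, window=14, k=2.0):
        """Flag the newest daily value of one sensor feature if it lies
        more than k standard deviations from the trailing baseline."""
        x = np.asarray(feature_history, dtype=float)
        baseline = x[-(window + 1):-1]           # previous `window` days
        mu = baseline.mean()
        sd = baseline.std(ddof=1) + 1e-9         # avoid division by zero
        z = (x[-1] - mu) / sd
        return abs(z) > k, z
    ```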

  1. Multicultural supervision: lessons learned about an ongoing struggle.

    PubMed

    Christiansen, Abigail Tolhurst; Thomas, Volker; Kafescioglu, Nilufer; Karakurt, Gunnur; Lowe, Walter; Smith, William; Wittenborn, Andrea

    2011-01-01

    This article examines the experiences of seven diverse therapists in a supervision course as they wrestled with the real-world application of multicultural supervision. Existing literature on multicultural supervision does not address the difficulties that arise in addressing multicultural issues in the context of the supervision relationship. The experiences of six supervisory candidates and one mentoring supervisor in addressing multicultural issues in supervision are explored. Guidelines for conversations regarding multicultural issues are provided.

  2. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally manually delineated by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by multi-atlas label fusion algorithm. The segmentation is refined by supervised statistical learning based voxel labeling with a random forest algorithm. Multiatlas label fusion (MALF) transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion that combines the results produced by all atlases into a consensus solution. Then, random forest (RF) employs an ensemble of decision trees that are trained on labeled thyroids to recognize features. The trained forest classifier is then applied to the thyroid estimated from the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes; background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and the RF classification further improved the DSC to 0.81.
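
    The simplest fusion rule behind MALF is majority voting over the propagated atlas masks; the sketch below shows that step only (the paper's fusion and the RF refinement are more sophisticated).

    ```python
    import numpy as np

    def majority_vote_fusion(atlas_masks):
        """Fuse binary thyroid masks warped from registered atlases.

        atlas_masks : (n_atlases, Z, Y, X) array of 0/1 masks.
        A voxel is labeled thyroid if more than half the atlases agree.
        """
        votes = atlas_masks.sum(axis=0)
        return (votes > atlas_masks.shape[0] / 2).astype(np.uint8)
    ```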

  3. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing, and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering, and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered spaces. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.
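
    As a hedged sketch of the simplest pipeline stage evaluated here, the snippet below median-filters a synthetic VOI and scores the original against the filtered version with one histogram-based pair-wise similarity measure (histogram intersection, chosen for illustration; the paper evaluates a plurality of such measures).

```python
import numpy as np
from scipy.ndimage import median_filter

def histogram_intersection(a, b, bins=64, value_range=(-1024, 400)):
    """Pair-wise similarity of two VOIs via intensity-histogram intersection."""
    ha, _ = np.histogram(a, bins=bins, range=value_range, density=True)
    hb, _ = np.histogram(b, bins=bins, range=value_range, density=True)
    return np.minimum(ha, hb).sum() / ha.sum()  # 1.0 means identical histograms

rng = np.random.default_rng(0)
voi = rng.normal(-800, 50, size=(32, 32, 32))  # stand-in for a lung VOI (HU)
filtered = median_filter(voi, size=3)          # simple median denoising
print(histogram_intersection(voi, filtered))
```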

  4. Sloan Digital Sky Survey photometric telescope automation and observing software

    SciTech Connect

    Eric H. Neilsen, Jr. et al.

    2002-10-16

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.

  5. Webly-supervised Fine-grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2016-12-08

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  6. Extracting microRNA-gene relations from biomedical literature using distant supervision

    PubMed Central

    Clarke, Luka A.; Couto, Francisco M.

    2017-01-01

Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies are available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning performed better on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which no training set had been developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can successfully extract relations from the literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel. PMID:28263989
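
    The record describes distant supervision only at a high level; the sketch below illustrates the core weak-labeling step, not IBRel's actual code. A sentence mentioning a miRNA and a gene is labeled positive only if the pair is already in the knowledge base; the knowledge-base entries and sentences are made up for illustration, and IBRel additionally groups such sentences into bags for multi-instance learning.

```python
# Hedged sketch of distant-supervision labeling (illustrative data only).
knowledge_base = {("miR-155", "SOCS1"), ("miR-21", "PTEN")}

def weak_label(sentences):
    """sentences: list of (text, mirna_mention, gene_mention) triples."""
    labeled = []
    for text, mirna, gene in sentences:
        # Positive if the entity pair exists in the knowledge base.
        label = 1 if (mirna, gene) in knowledge_base else 0
        labeled.append((text, mirna, gene, label))
    return labeled

corpus = [
    ("miR-155 directly targets SOCS1 in T cells.", "miR-155", "SOCS1"),
    ("miR-155 was detected alongside BRCA1.", "miR-155", "BRCA1"),
]
for item in weak_label(corpus):
    print(item)
```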

  7. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
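
    As a hedged illustration of the claimed noise benefit, and not the authors' exact formulation, the sketch below perturbs k-means centroid updates with noise that decays over iterations, so late iterations reduce to plain k-means.

```python
import numpy as np

def noisy_kmeans(X, k, iters=50, noise_scale=0.5, seed=0):
    """k-means with annealed noise added to centroid updates.
    Noise decays as 1/(t+1), recovering plain k-means asymptotically."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for t in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                jitter = rng.normal(0, noise_scale / (t + 1), X.shape[1])
                centroids[j] = members.mean(axis=0) + jitter
    return centroids, labels

# Two well-separated toy clusters.
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0, 3)])
centroids, labels = noisy_kmeans(X, k=2)
print(centroids)
```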

  8. Automated Verification of IGRT-based Patient Positioning.

    PubMed

    Jiang, Xiaojun; Fox, Tim; Cordova, James S; Schreibmann, Eduard

    2015-11-08

A system for automated quality assurance of a therapist's registration in radiotherapy was designed and tested in clinical practice. The approach complements the clinical software's automated registration in terms of algorithm configuration and performance, and constitutes a practical approach for ensuring safe patient setups. Per our convergence analysis, evolutionary algorithms perform better at finding the global optima of the cost function, with discrepancies from a deterministic optimizer seen only sporadically.

  10. Classroom Supervision and Informal Analysis of Behavior. A Manual for Supervision.

    ERIC Educational Resources Information Center

    Hull, Ray; Hansen, John

This manual for supervision addresses itself to those with responsibility for helping teachers develop into skilled professionals through use of a rational plan of feedback and assistance. It describes the supervision cycle and outlines simple and practical techniques to collect effective data that will assist the classroom teacher. The manual has…

  11. Diversifying Supervision for Maximum Professional Growth: Is a Well-Supervised Teacher a Satisfied Teacher?

    ERIC Educational Resources Information Center

    Robinson, Sylvia G.

    This paper examines the relationship between various characterizations of the clinical supervision model and teacher job satisfaction. The first part of the paper describes teacher job satisfaction and looks at the history and meaning of clinical supervision. The next part of the paper describes Barbara Pavan's (1993) revised clinical supervision…

  12. Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this article, we propose several new approaches for post processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by the post processing the rules with ...

  13. Active semi-supervised community detection based on must-link and cannot-link constraints.

    PubMed

    Cheng, Jianjun; Leng, Mingwei; Li, Longjie; Zhou, Hanhai; Chen, Xiaoyun

    2014-01-01

Community structure detection is of great importance because it can help in discovering the relationship between the function and the topology structure of a network. Many community detection algorithms have been proposed, but how to incorporate prior knowledge into the detection process remains a challenging problem. In this paper, we propose a semi-supervised community detection algorithm, which makes full use of must-link and cannot-link constraints to guide the process of community detection and thereby extracts high-quality community structures from networks. To acquire high-quality must-link and cannot-link constraints, we also propose a semi-supervised component generation algorithm based on active learning, which actively selects nodes with maximum utility for the proposed semi-supervised community detection algorithm step by step, and then generates the must-link and cannot-link constraints by accessing a noiseless oracle. Extensive experiments were carried out, and the results show that introducing active learning into community detection is successful: our proposed method extracts high-quality community structures from networks and significantly outperforms the other methods compared.
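
    The record does not spell out how the constraints are enforced; one common construction, shown below as an assumption rather than the paper's procedure, merges must-linked nodes with union-find before detection and then checks that cannot-linked nodes ended up with different representatives.

```python
# Hedged sketch: enforcing must-link / cannot-link constraints (illustrative).
class DSU:
    """Union-find over node names, used to merge must-linked nodes."""
    def __init__(self, nodes):
        self.parent = {n: n for n in nodes}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

nodes = ["a", "b", "c", "d"]
dsu = DSU(nodes)
for u, v in [("a", "b")]:                 # must-link pairs: force same community
    dsu.union(u, v)
ok = all(dsu.find(u) != dsu.find(v)       # cannot-link pairs must stay apart
         for u, v in [("a", "c")])
print({n: dsu.find(n) for n in nodes}, ok)
```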

  14. Human-Automation Collaborative RRT for UAV Mission Path Planning

    DTIC Science & Technology

    2010-06-01

Human-Automation Collaborative RRT for UAV Mission Path Planning, by Americo De Jesus Caves, S.B. in Mathematics, Massachusetts Institute of Technology. … It is critical to understand how individual operators will be able to supervise a team of vehicles performing semi-autonomous path planning while avoiding no-fly zones.
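
    Since the thesis centers on RRT path planning under no-fly constraints, a hedged sketch of a basic 2-D RRT with a circular keep-out zone may be useful; the workspace bounds, step size, and goal bias are illustrative, and the thesis's human-collaborative variant is not reproduced here.

```python
import math, random

def rrt(start, goal, no_fly, step=0.5, iters=2000, goal_tol=0.5, seed=0):
    """Basic 2-D RRT; `no_fly` is a list of (cx, cy, r) circular keep-out zones."""
    random.seed(seed)
    nodes, parent = [start], {0: None}
    def blocked(p):
        return any(math.dist(p, (cx, cy)) < r for cx, cy, r in no_fly)
    for _ in range(iters):
        # Sample the goal 10% of the time, otherwise a random workspace point.
        sample = goal if random.random() < 0.1 else \
                 (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near = nodes[i]
        d = math.dist(near, sample) or 1e-9
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if blocked(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:   # reached goal: trace path back
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None

print(rrt((0.5, 0.5), (9.5, 9.5), no_fly=[(5, 5, 2)]))
```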

  15. Dimensionality reduction by supervised neighbor embedding using laplacian search.

    PubMed

    Zheng, Jianwei; Zhang, Hangke; Cattani, Carlo; Wang, Wanliang

    2014-01-01

Dimensionality reduction is an important issue for numerous applications, including biomedical image analysis and living system analysis. Neighbor embedding methods, such as the elastic embedding techniques, which represent both global and local structure and can deal with multiple manifolds, can go beyond traditional dimensionality reduction methods and find better optima. Nevertheless, existing neighbor embedding algorithms cannot be directly applied to classification because they suffer from several problems: (1) high computational complexity, (2) nonparametric mappings, and (3) lack of class label information. We propose a supervised neighbor embedding called discriminative elastic embedding (DEE), which integrates a linear projection matrix and class labels into the final objective function. In addition, we present the Laplacian search direction for fast convergence. DEE is evaluated in three aspects: embedding visualization, training efficiency, and classification performance. Experimental results on several benchmark databases show that the proposed DEE is a supervised dimensionality reduction approach which not only has strong pattern-revealing capability, but also brings computational advantages over standard gradient-based methods.

  16. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    PubMed

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one, and results show that classifiers trained using the automatically segmented training sets were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving average F-measures of 0.969 and 0.87 on two weakly supervised datasets.

  17. Success with an automated computer control system

    NASA Astrophysics Data System (ADS)

    Roberts, M. L.; Moore, T. L.

    1991-05-01

LLNL has successfully implemented a distributed computer control system for automated operation of an FN tandem accelerator. The control system software utilized is the Thaumaturgic Automated Control Logic (TACL) written by the Continuous Electron Beam Accelerator Facility and co-developed with LLNL. Using TACL, accelerator components are controlled through CAMAC using a two-tiered structure. Analog control and measurement are at 12- or 16-bit precision as appropriate. Automated operation has been implemented for several nuclear analytical techniques, including hydrogen depth profiling and accelerator mass spectrometry. An additional advantage of TACL lies in its expansion capabilities. Without disturbing existing control definitions and algorithms, additional control algorithms and display functions can be implemented quickly.

  18. Pricing Structures for Automated Library Consortia.

    ERIC Educational Resources Information Center

    Machovec, George S.

    1993-01-01

    Discusses the development of successful pricing algorithms for cooperative library automation projects. Highlights include desirable characteristics of pricing measures, including simplicity and the ability to allow for system growth; problems with transaction-based systems; and a review of the pricing strategies of seven library consortia.…

  19. Guidelines for clinical supervision in health service psychology.

    PubMed

    2015-01-01

This document outlines guidelines for supervision of students in health service psychology education and training programs. The goal was to capture optimal performance expectations for psychologists who supervise. It is based on the premises that supervisors (a) strive to achieve competence in the provision of supervision and (b) employ a competency-based, meta-theoretical approach to the supervision process. The Guidelines on Supervision were developed as a resource to inform education and training regarding the implementation of competency-based supervision. The Guidelines on Supervision build on the robust literatures on competency-based education and clinical supervision. They are organized around seven domains: supervisor competence; diversity; relationships; professionalism; assessment/evaluation/feedback; problems of professional competence; and ethical, legal, and regulatory considerations. The Guidelines on Supervision represent the collective effort of a task force convened by the American Psychological Association (APA) Board of Educational Affairs (BEA).

  20. Supervised DNA Barcodes species classification: analysis, comparisons and results

    PubMed Central

    2014-01-01

    Background Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcode and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcode: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. Methods In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. Results A software that converts the DNA Barcode FASTA sequences to the Weka format is released, to adapt different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human interpretable classification model. Rule-based methods have slightly inferior classification performances, but deliver the species specific positions and nucleotide assignments. On synthetic data the supervised machine learning methods obtain superior classification performances with respect to the traditional DNA Barcode
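
    As a hedged sketch of the supervised classification task, with toy sequences and a k-mer featurization that the paper itself may not use, an SVM can be trained on Barcode fragments as follows.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Toy COI-like fragments; real Barcode sequences are hundreds of bases long.
train_seqs = ["ATGCGTACGTTAGC", "ATGCGTACGATAGC", "TTACGGCATGCCAA", "TTACGGCTTGCCAA"]
train_species = ["species_A", "species_A", "species_B", "species_B"]

def kmers(seq, k=4):
    """Turn a sequence into space-separated overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

vec = CountVectorizer()  # bag of overlapping k-mers
X = vec.fit_transform(kmers(s) for s in train_seqs)
clf = SVC(kernel="linear").fit(X, train_species)

query = vec.transform([kmers("ATGCGTACGTTAGC")])
print(clf.predict(query))  # -> ['species_A']
```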

  1. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering minimal information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
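
    The prototype idea can be illustrated, under assumptions, with a standard Nyström-style low-rank approximation of the kernel matrix built from m prototype points; this is a textbook construction, not necessarily the PVM's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
prototypes = X[rng.choice(len(X), 20, replace=False)]  # m << n prototypes

# Rank-m Nystrom approximation: K ~= K_nm K_mm^+ K_nm^T.
K_nm = rbf_kernel(X, prototypes)
K_mm = rbf_kernel(prototypes, prototypes)
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_full = rbf_kernel(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative approximation error: {err:.3f}")
```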

  2. Self-supervised MRI tissue segmentation by discriminative clustering.

    PubMed

    Gonçalves, Nicolau; Nikkilä, Janne; Vigário, Ricardo

    2014-02-01

    The study of brain lesions can benefit from a clear identification of transitions between healthy and pathological tissues, through the analysis of brain imaging data. Current signal processing methods, able to address these issues, often rely on strong prior information. In this article, a new method for tissue segmentation is proposed. It is based on a discriminative strategy, in a self-supervised machine learning approach. This method avoids the use of prior information, which makes it very versatile, and able to cope with different tissue types. It also returns tissue probabilities for each voxel, crucial for a good characterization of the evolution of brain lesions. Simulated as well as real benchmark data were used to validate the accuracy of the method and compare it against other segmentation algorithms.

  3. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    SciTech Connect

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  4. Autonomy and Automation

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  5. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the advantages of the framework in precision, robustness, scalability, and computational efficiency.

  6. Automated Confocal Microscope Bias Correction

    NASA Astrophysics Data System (ADS)

    Dorval, Thierry; Genovesio, Auguste

    2006-10-01

Illumination artifacts systematically occur in 2D cross-section confocal microscopy imaging. These biases can strongly corrupt higher-level image processing such as segmentation, fluorescence evaluation, or pattern extraction/recognition. This paper presents a new, fully automated bias correction methodology based on large image database preprocessing. The method is well suited to High Content Screening (HCS), a method dedicated to drug discovery. Our method assumes that the number of images available is large enough to allow a reliable statistical computation of an average bias image. A relevant segmentation evaluation protocol and experimental results validate our correction algorithm by outperforming object extraction on non-corrected images.
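
    A hedged sketch of the central assumption: averaged over a large enough image database, the scene content washes out and the per-pixel mean approximates the multiplicative illumination bias, which can then be divided out. The normalization and synthetic bias below are illustrative.

```python
import numpy as np

def estimate_bias(image_stack):
    """Average a large stack of HCS images; content averages out and the
    per-pixel mean approximates the multiplicative illumination bias."""
    bias = image_stack.mean(axis=0)
    return bias / bias.mean()  # normalize so correction preserves overall scale

def correct(image, bias, eps=1e-6):
    return image / (bias + eps)

# Synthetic demonstration: a left-to-right illumination gradient.
rng = np.random.default_rng(0)
true_bias = np.linspace(0.6, 1.4, 64)[None, :] * np.ones((64, 64))
stack = rng.uniform(50, 200, (500, 64, 64)) * true_bias  # biased acquisitions
bias = estimate_bias(stack)
print(np.allclose(bias, true_bias / true_bias.mean(), atol=0.05))  # ~True
```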

  7. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  8. Intelligent robots and computer vision IX: Algorithms and techniques; Proceedings of the Meeting, Boston, MA, Nov. 5-7, 1990

    SciTech Connect

    Casasent, D.P. )

    1991-01-01

This volume presents the newest research results, trends, and developments in intelligent robots and computer vision, considering topics in pattern recognition for computer vision, image processing, intelligent material handling and vision, novel preprocessing algorithms and hardware, technology for support of intelligent robots and automated systems, fuzzy logic in intelligent systems and computer vision, and segmentation techniques. Attention is given to production quality control problems, recognition in face space, automatic vehicle model identification, active stereo inspection using computer solids models, use of coordinate mapping as a method for image data reduction, integration of a computer vision system with an IBM 7535 robot, fuzzy logic controller structures, supervised pixel classification using a feature space derived from an artificial visual system, and multiresolution segmentation of forward-looking IR and SAR imagery using neural networks.

  9. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  10. Hierarchical Wireless Multimedia Sensor Networks for Collaborative Hybrid Semi-Supervised Classifier Learning

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Daowei; Ding, Liang

    2007-01-01

Wireless multimedia sensor networks (WMSN) have recently emerged as one of the most important technologies, driven by powerful multimedia signal acquisition and processing abilities. Target classification is an important research issue addressed in WMSN, with strict requirements for robustness, speed, and accuracy. This paper proposes a collaborative semi-supervised classifier learning algorithm to achieve sustained online learning for support vector machine (SVM) based robust target classification. The proposed algorithm incrementally carries out the semi-supervised classifier learning process in hierarchical WMSN, with the collaboration of multiple sensor nodes in a hybrid computing paradigm. To decrease the energy consumption and improve performance, some metrics are introduced to evaluate the effectiveness of the samples in specific sensor nodes, and a sensor node selection strategy is also proposed to reduce the impact of inevitable missed detections and false detections. With ant optimization routing, the learning process is implemented with the selected sensor nodes, which decreases the energy consumption. Experimental results demonstrate that the collaborative hybrid semi-supervised classifier learning algorithm can effectively implement target classification in hierarchical WMSN. It has outstanding performance in terms of energy efficiency and time cost, which verifies the effectiveness of the sensor node selection and ant optimization routing.

  11. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    PubMed

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
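
    A hedged sketch of the basic building block, a Gaussian RBF interpolator mapping new samples into a precomputed embedding, without the paper's direction-dependent regularizer or iterative label estimation; the kernel width and ridge term are illustrative.

```python
import numpy as np

def fit_rbf(X_train, Y_embed, sigma=1.0, reg=1e-6):
    """Solve for RBF weights W so that K(X_train, X_train) @ W = Y_embed."""
    d2 = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    return np.linalg.solve(K + reg * np.eye(len(X_train)), Y_embed)

def rbf_extend(X_new, X_train, W, sigma=1.0):
    """Out-of-sample extension: embed unseen points via the interpolator."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))   # training samples
Y = rng.normal(size=(30, 2))   # their (precomputed) 2-D embedding
W = fit_rbf(X, Y)
print(rbf_extend(X[:3], X, W))  # ~= Y[:3] on the training points themselves
```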

  12. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2011-01-01

defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised), image classification … (report keywords: banks, image analysis, edge finding, photography, satellite, texture, entropy) … high resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been…
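
    The report's subject terms point at entropy-based texture; a hedged sketch of a local-entropy feature (smooth water yields low entropy, textured land high) is shown below. The window size and threshold are illustrative assumptions, not the report's values.

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(window):
    """Shannon entropy of the gray-level histogram inside one window."""
    counts = np.bincount(window.astype(int), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
water = rng.integers(100, 104, (32, 32))  # smooth, low-texture region
land = rng.integers(0, 256, (32, 32))     # rough, high-texture region
image = np.hstack([water, land]).astype(float)

entropy_map = generic_filter(image, local_entropy, size=7)
water_mask = entropy_map < 4.0            # illustrative threshold
print(water_mask[:, :32].mean(), water_mask[:, 32:].mean())  # high vs. low fraction
```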

  13. Empirical Analysis and Automated Classification of Security Bug Reports

    NASA Technical Reports Server (NTRS)

    Tyo, Jacob P.

    2016-01-01

With the ever-expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong to only a small number of types. Addressing these types of vulnerabilities will consequently lead to cost-efficient improvement of software security. Since this analysis requires labeling of each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security or non-security related (two-class classification), as well as each security-related bug report as a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. An accuracy of 92%, recall of 96%, precision of 92%, probability of false alarm of 4%, F-score of 81%, and G-score of 90% were the best results achieved during two-class classification. Furthermore, an accuracy of 80%, recall of 80%, precision of 94%, and F-score of 85% were the best results achieved during multiclass classification.
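
    A hedged sketch of the two-class setup (security vs. non-security) with TF-IDF features; the toy reports and the Naive Bayes classifier are illustrative stand-ins, not the thesis's exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up bug reports for illustration only.
reports = [
    "buffer overflow allows remote code execution",
    "SQL injection in login form bypasses authentication",
    "button label is misaligned on settings page",
    "application crashes when file name is empty",
]
labels = ["security", "security", "non-security", "non-security"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(reports, labels)
print(clf.predict(["attacker can execute arbitrary code via crafted input"]))
```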

  14. 20 CFR 655.30 - Supervised recruitment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... recruitment activities or failed in any obligation of this part, the CO may require pre-filing supervised... provided below. (1) The CO will direct where the advertisements are to be placed. (2) The employer must supply a draft advertisement and job order to the CO for review and approval no fewer than 150...

  15. Cognitive Dissonance, Supervision, and Administrative Team Conflict

    ERIC Educational Resources Information Center

    Zepeda, Sally J.

    2006-01-01

    Purpose: The purpose of this paper is to record and summarize the tensions and problems experienced by a high school administrative team as they attempted to change supervision alongside instruction in a transition to a new block schedule. Design/methodology/approach: A case study method was used. As a case study, the research is contextual in…

  16. Cybersupervision: Conducting Supervision on the Information Superhighway.

    ERIC Educational Resources Information Center

    Coursol, Diane

    The internship experience is an integral part of the graduate program for counselor education students. The APA Code of Ethics and Standards of Practice and the ACPA code of ethics require that students receive regular supervision from site and faculty supervisors during the practicum and internship experiences. However, when student counselors…

  17. Implementing Clinical Supervision: A District Approach.

    ERIC Educational Resources Information Center

    Blake, Norine; DeMont, Roger A.

    This paper describes the Avondale School District's approach to incorporating clinical supervision within the teacher evaluation process. The development of major teacher appraisal systems, their underlying philosophies, and their characteristics are reviewed. In addition, specific processes and training activities used to develop a district model…

  18. Group Supervision of School Psychologists in Training.

    ERIC Educational Resources Information Center

    Haboush, Karen L.

    2003-01-01

    Little has been written about the nature of group supervision for school psychologists in training. In organizing the supervisor's approach to running such a group, the following theoretical models may prove effective: attachment theory, object relations theory, group theory and self psychology. Case examples are discussed in order to illustrate…

  19. Interdisciplinary Doctoral Research Supervision: A Scoping Review

    ERIC Educational Resources Information Center

    Vanstone, Meredith; Hibbert, Kathy; Kinsella, Elizabeth Anne; McKenzie, Pam; Pitman, Allan; Lingard, Lorelei

    2013-01-01

    This scoping literature review examines the topic of interdisciplinary doctoral research supervision. Interdisciplinary doctoral research programs are expanding in response to encouragement from funding agencies and enthusiasm from faculty and students. In an acknowledgement that the search for creative and innovative solutions to complex problems…

  20. Exploring Principals' Perceptions of Supervised Agricultural Experience

    ERIC Educational Resources Information Center

    Rayfield, John; Wilson, Elizabeth

    2009-01-01

    This study explored the perceptions of principals at high schools with agricultural education programs in regard to Supervised Agricultural Experience (SAE). There is evidence that suggests that high school principals' attitudes may both directly and indirectly affect factors that influence school climate and student achievement. In this study,…

  1. 32 CFR 552.65 - Command supervision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....65 Command supervision. (a) All insurance business conducted on Army installation will be by appointment. When setting up the appointment, insurance agents must identify themselves to the prospective... business capacity for the solicitation of insurance to personnel on a military installation with or...

  2. 32 CFR 552.65 - Command supervision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....65 Command supervision. (a) All insurance business conducted on Army installation will be by appointment. When setting up the appointment, insurance agents must identify themselves to the prospective... business capacity for the solicitation of insurance to personnel on a military installation with or...

  3. Human Supervision of Robotic Site Surveys

    NASA Astrophysics Data System (ADS)

    Schreckenghost, Debra; Fong, Terrence; Milam, Tod

    2008-01-01

Ground operators will interact remotely with robots on the lunar surface to support site preparation and survey. Astronauts will interact with robots to support outpost buildup and maintenance, as well as mission operations. One mode of interaction required for such operations is the ability to supervise robots performing routine autonomous tasks. Supervision of autonomous robotic activities requires monitoring the robot's performance of tasks with minimal human effort. This includes understanding its progress on tasks, awareness when important milestones are achieved or problems impede tasks, and reconstructing situations after the fact by relating task events to recorded data. We are developing a software framework to support such interaction among distributed human teams and robots. We are evaluating our framework for human supervision of mobile robots performing routine site survey operations. We are prototyping a system that (1) monitors data from the K10 robot performing surveys to determine the depth of permafrost at Haughton Crater on Devon Island, (2) computes performance measures about how well the survey is going, (3) builds summaries of these performance measures, and (4) notifies appropriate personnel when milestones are achieved or performance indicates a problem. We will evaluate our prototype using data collected during Operational Readiness Tests for the Haughton Crater field test conducted in July 2007. In this paper we describe our approach for human supervision of robotic activities and report the results of our evaluation with the K10 robot.

  4. 9 CFR 145.11 - Supervision.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Supervision. 145.11 Section 145.11 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... sample collecting provided for in § 145.14 and may designate qualified persons as Authorized...

  5. 9 CFR 146.10 - Supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Supervision. 146.10 Section 146.10 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... authorize qualified persons as State Inspectors to perform the selecting and testing of participating...

  6. 9 CFR 145.11 - Supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Supervision. 145.11 Section 145.11 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... sample collecting provided for in § 145.14 and may designate qualified persons as Authorized...

  7. 9 CFR 145.11 - Supervision.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Supervision. 145.11 Section 145.11 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... sample collecting provided for in § 145.14 and may designate qualified persons as Authorized...

  8. 9 CFR 145.11 - Supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Supervision. 145.11 Section 145.11 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... sample collecting provided for in § 145.14 and may designate qualified persons as Authorized...

  9. 9 CFR 146.10 - Supervision.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Supervision. 146.10 Section 146.10 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... authorize qualified persons as State Inspectors to perform the selecting and testing of participating...

  10. A "Critical" Perspective for Clinical Supervision.

    ERIC Educational Resources Information Center

    Smyth, John

    1988-01-01

Asserts that teachers must form collaborative alliances and nonevaluative dialogue to regain control over their own professional development. The empowering potential of Robert Goldhammer's and Morris Cogan's original conceptions of clinical supervision has been distorted through the process of redefinition by vested interests to a form of…

  11. Management & Supervision Personnel Administration Training; General Reference.

    ERIC Educational Resources Information Center

    United States Government Printing Office, Washington, DC. Training and Career Development Div.

    This report lists 329 books in the library of the Training and Career Development Division of the Personnel Service. The books are listed under six categories. They are: personnel administration (46), management and supervision (60), general reference (57), training (20), American Management Association (AMA) publications (118), and United States…

  12. 36 CFR 25.3 - Supervision; suspensions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....3 Section 25.3 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR NATIONAL MILITARY PARKS; LICENSED GUIDE SERVICE REGULATIONS § 25.3 Supervision; suspensions. (a) The guide service will operate under the direction of the superintendent or his designated representative....

  13. Remote Video Supervision in Adapted Physical Education

    ERIC Educational Resources Information Center

    Kelly, Luke; Bishop, Jason

    2013-01-01

    Supervision for beginning adapted physical education (APE) teachers and inservice general physical education teachers who are learning to work with students with disabilities poses a number of challenges. The purpose of this article is to describe a project aimed at developing a remote video system that could be used by a university supervisor to…

  14. Spirituality and School Counselor Education and Supervision

    ERIC Educational Resources Information Center

    Gallo, Laura L.

    2014-01-01

    Spirituality is an area that has not received a great deal of attention in supervision, yet it can have substantial effects on the counseling process. A definition of spirituality that allows for a variety of worldviews can be useful to both counselor and client as it helps strengthen the counseling relationship and lessen differences between…

  15. Computer Monitor Supervision: A Clinical Note.

    ERIC Educational Resources Information Center

    Scherl, Charles R.; Haley, Jay

    2000-01-01

    Presents communication procedures for supervisors and therapy trainees that have been developed as a result of the use of computer technology. Using the computer as a supervision tool, therapy can be influenced by the supervisor while minimizing disruption. Successes and pitfalls in a master's level practicum course in family therapy are…

  16. 75 FR 59799 - Office of Thrift Supervision

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-28

    ... Office of Thrift Supervision Purchase of Branch Office(s) and/or Transfer of Assets/Liabilities AGENCY... collection displays a currently valid OMB control number. As part of the approval process, we invite comments... technology. We will summarize the comments that we receive and include them in the OTS request for...

  17. Research Supervision: The Research Management Matrix

    ERIC Educational Resources Information Center

    Maxwell, T. W.; Smyth, Robyn

    2010-01-01

    We briefly make a case for re-conceptualising research project supervision/advising as the consideration of three inter-related areas: the learning and teaching process; developing the student; and producing the research project/outcome as a social practice. We use this as our theoretical base for an heuristic tool, "the research management…

  18. A Social Reconstruction Model of Supervision.

    ERIC Educational Resources Information Center

    Seda, E. Elliott

    This paper presents a social reconstructionist model of supervision. The model connects schools and society, and considers the vital role teachers, students, staff, and others play in developing, designing, and implementing reforms in school and society. The model is based on the philosophy of social reconstructionism, which views schools as…

  19. Supporting Placement Supervision in Clinical Exercise Physiology

    ERIC Educational Resources Information Center

    Sealey, Rebecca M.; Raymond, Jacqueline; Groeller, Herb; Rooney, Kieron; Crabb, Meagan; Watt, Kerrianne

    2015-01-01

    The continued engagement of the professional workforce as supervisors is critical for the sustainability and growth of work-integrated learning activities in university degrees. This study investigated factors that influence the willingness and ability of clinicians to continue to supervise clinical exercise physiology work-integrated learning…

  20. 20 CFR 656.21 - Supervised recruitment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... resumes or applications for the job opportunity to the CO for referral to the employer; (ii) Include an... job requirements, which can not exceed any of the requirements entered on the application form by the..., post-filing supervised recruitment may be required of the employer for the pending application...