Science.gov

Sample records for automatic model based

  1. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
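
    As a concrete illustration of the algorithmic distinction drawn here, below is a minimal sketch (not from the paper; the state space, transition model, and parameter values are illustrative assumptions) contrasting a model-free temporal-difference update with model-based planning over a learnt world model:

```python
import numpy as np

n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.1

# Model-free ("habitual"): cached action values updated from raw experience.
Q_mf = np.zeros((n_states, n_actions))

def model_free_update(s, a, r, s_next):
    td_error = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * td_error  # fast, cheap, inflexible

# Model-based ("goal-directed"): plan by value iteration over a learnt model.
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # learnt P(s'|s,a)
R = np.zeros((n_states, n_actions))                           # learnt reward

def model_based_values(n_sweeps=50):
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):   # sophisticated calculation at choice time
        Q = R + gamma * T @ Q.max(axis=1)
    return Q
```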

  2. Automatic brain segmentation and validation: image-based versus atlas-based deformable models

    NASA Astrophysics Data System (ADS)

    Aboutanos, Georges B.; Dawant, Benoit M.

    1997-04-01

    Due to the complexity of the brain surface, there is at present no segmentation method that works automatically and consistently on any 3-D magnetic resonance (MR) images of the head, and there is a definite lack of validation studies related to automatic brain extraction. In this work we present an image-based automatic method for brain segmentation and use its results as input to a deformable model method, which we call an image-based deformable model. Combining image-based methods with a deformable model can lead to a robust segmentation method without requiring registration of the image volumes into a standardized space, the automation of which remains challenging for pathological cases. We validate our segmentation results on 3-D MP-RAGE (magnetization-prepared rapid gradient-echo) volumes for the image model pre- and post-deformation and compare them to an atlas model pre- and post-deformation. Our validation is based on comparing volume measurements to manually segmented data. Our analysis shows that the improvements afforded by the deformable model methods are statistically significant; however, there are no significant differences between the image-based and atlas-based deformable model methods.

  3. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, describing models manually in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing modeling programs have shortcomings; in particular, some are inaccurate or tied to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based automatic modeling method was developed. The essence of this method is translating between the boundary representation (B-REP) used by CAD models and the constructive solid geometry (CSG) used by GDML models. First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that the method converts standard CAD models accurately and can be used for Geant4 automatic modeling.

  4. Automatic shape model building based on principal geodesic analysis bootstrapping.

    PubMed

    Dam, Erik B; Fletcher, P Thomas; Pizer, Stephen M

    2008-04-01

    We present a novel method for automatic shape model building from a collection of training shapes. The result is a shape model consisting of the mean model and the major modes of variation, with a dense correspondence map between individual shapes. The framework consists of iterations in which a medial shape representation is deformed into the training shapes, followed by computation of the shape mean and modes of shape variation. In the first iteration, a generic shape model is used as the starting point; in the following iterations of the bootstrap method, the resulting mean and modes from the previous iteration are used. Thereby, we gradually capture the shape variation in the training collection better and better. Convergence of the method is explicitly enforced. The method is evaluated on collections of artificial training shapes where the expected shape mean and modes of variation are known by design. Furthermore, collections of real prostates and cartilage sheets are used in the evaluation. The evaluation shows that the method is able to capture the training shapes close to the attainable accuracy already in the first iteration. Furthermore, the correspondence properties measured by generality, specificity, and compactness are improved during the shape model building iterations.
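
    A rough sketch of the bootstrap loop, with PCA as a linear stand-in for the paper's principal geodesic analysis and with the deform-to-fit step idealized as projection onto the current modes (landmark vectors in correspondence are an assumed input):

```python
import numpy as np

def build_shape_model(train, n_modes=5, n_iters=10, tol=1e-6):
    """train: (n_shapes, n_landmark_coords) shape vectors in correspondence."""
    fitted = train                           # first iteration: raw shapes
    for _ in range(n_iters):
        mean = fitted.mean(axis=0)           # shape mean
        _, _, Vt = np.linalg.svd(fitted - mean, full_matrices=False)
        modes = Vt[:n_modes]                 # major modes of variation
        # "Deform" the current model into each training shape: project each
        # residual onto the retained modes (stand-in for medial deformation).
        new_fitted = mean + (train - mean) @ modes.T @ modes
        if np.linalg.norm(new_fitted - fitted) < tol:   # enforce convergence
            break
        fitted = new_fitted
    return mean, modes
```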

  5. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    NASA Astrophysics Data System (ADS)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting gait under various conditions.
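
    The matching step is standard dynamic time warping (DTW) over sequences of estimated LDM parameter vectors. A minimal sketch (the per-frame parameter format is an assumption; the paper additionally combines part-level match scores with the AdaBoost.M2 scheme):

```python
import numpy as np

def dtw_distance(x, y):
    """DTW cost between two gait sequences of shape (n_frames, n_params)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])   # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

# Recognition: assign a probe sequence to the gallery subject with the
# smallest DTW distance.
```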

  6. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  7. Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Peng, Zhigang; Liao, Shu; Shinagawa, Yoshihisa; Zhan, Yiqiang; Hermosillo, Gerardo; Zhou, Xiang Sean

    2014-03-01

    Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods, and low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing the boundaries of different bones that lie in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach is to perform a hierarchical articulated shape deformation that is driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of ~89.70%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.

  8. Feature Extraction Using Attributed Scattering Center Models for Model-Based Automatic Target Recognition (ATR)

    DTIC Science & Technology

    2005-10-01

    systems employing synthetic aperture radar. This report summarizes the major technical accomplishments that were realized. We developed a set of... Subject terms: automatic target recognition, ATR performance prediction, synthetic aperture radar.

  9. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    NASA Astrophysics Data System (ADS)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    This article addresses the problem of machining accuracy for the basing holes of automatically replaceable cubical units (carriers) in reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units on the basis of dimensional chain analysis are presented. The influence of machining parameters on the accuracy of the center-to-center spacing between basing holes is shown. A mathematical model of the machining accuracy of carrier basing holes is offered.

  10. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step transforms the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  11. Model-based vision system for automatic recognition of structures in dental radiographs

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between the cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge-based system is proposed to automatically locate the two landmarks, namely the CEJ and the level of the alveolar crest at its junction with the periodontal ligament space. This work is part of an ongoing project to automatically measure the distance between the CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary, using local edge detection and edge thresholding to establish a reference, and then using model knowledge to process sub-regions in locating the landmarks. The segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine, and then extending these boundaries towards the periodontal ligament space and the tooth boundary, respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection, and a control structure based on the blackboard model to coordinate the activities of these tools.

  12. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; such a platform must rapidly search for a designated object against a complex background, so fast detection of a given target is the foundation of any further task. Since the chest-shape target is a common target in shooting practice, this paper takes it as the target of interest and studies an automatic detection method based on Deformable Part Models (DPM). The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target over the HOG feature pyramid with a sliding window. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that the algorithm can detect chest-shape targets in natural environments, indoors or outdoors. The true positive rate reaches 76% on a test set with many hard samples, while the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) in C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on such hardware platforms, and it has application prospects in actual systems.
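
    A simplified single-filter sketch of the sliding-window stage over a HOG feature pyramid (using scikit-image; the paper's full DPM uses a root filter plus part filters trained with a latent SVM, and the linear weights `w`, `b` below are assumed to come from such training):

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import pyramid_gaussian

def detect(gray_image, w, b, win=(96, 48), step=16, thresh=0.0):
    """Score every window at every pyramid level with a linear classifier."""
    hits = []
    for level, img in enumerate(pyramid_gaussian(gray_image, downscale=1.5)):
        if img.shape[0] < win[0] or img.shape[1] < win[1]:
            break                                  # window no longer fits
        for y in range(0, img.shape[0] - win[0], step):
            for x in range(0, img.shape[1] - win[1], step):
                feats = hog(img[y:y + win[0], x:x + win[1]],
                            orientations=9, pixels_per_cell=(8, 8),
                            cells_per_block=(2, 2))
                if w @ feats + b > thresh:         # linear SVM score
                    hits.append((level, x, y))
    return hits
```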

  13. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnosis requires the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, already in screening practice, provides a great potential solution. In this paper, a fast and robust automatic exudate detection method based on moving-average histogram models of the fuzzy image was applied, from which an improved histogram was derived. After segmentation of the exudate candidates, the true exudates were pruned based on a Sobel edge detector and automatic Otsu thresholding, resulting in accurate localization of the exudates in digital retinal images. To compare the performance of exudate detection methods, we constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1,220 retinal images. Results show that the method achieves an overall best sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively.
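
    A region-level sketch of the pruning step with scikit-image (the file name and the edge-strength threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np
from skimage import color, filters, io, measure

img = color.rgb2gray(io.imread("retina.png"))    # hypothetical input image

candidates = img > filters.threshold_otsu(img)   # automatic Otsu threshold
edges = filters.sobel(img)                       # Sobel gradient magnitude

# Keep only candidate regions with strong edge support; weakly delimited
# bright blobs (e.g. noise, reflections) are pruned.
keep = np.zeros_like(candidates)
for region in measure.regionprops(measure.label(candidates)):
    rows, cols = region.coords.T
    if edges[rows, cols].mean() > 0.05:          # illustrative threshold
        keep[rows, cols] = True
```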

  14. Contour-based automatic crater recognition using digital elevation models from Chang'E missions

    NASA Astrophysics Data System (ADS)

    Zuo, Wei; Zhang, Zhoubin; Li, Chunlai; Wang, Rongwu; Yu, Linjie; Geng, Liang

    2016-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). Through DEM and image processing, this method can be used to reconstruct contour surfaces, extract and combine contour lines, set the characteristic parameters of crater morphology, and establish a crater pattern recognition program. The method has been tested and verified with DEM data from Chang'E-1 (CE-1) and Chang'E-2 (CE-2), showing a strong crater recognition ability with a high detection rate, high robustness, and good adaptation for recognizing craters of different diameters and morphologies. The method has been used to identify craters on the Moon with high precision and accuracy. The results meet the requirements for supporting exploration and related scientific research on the Moon and planets.

  15. Atlas-based automatic mouse brain image segmentation revisited: model complexity vs. image registration.

    PubMed

    Bai, Jordan; Trinh, Thi Lan Huong; Chuang, Kai-Hsiang; Qiu, Anqi

    2012-07-01

    Although many atlas-based segmentation methods have been developed and validated for the human brain, limited work has been done for the mouse brain. This paper investigated the roles of image registration and segmentation model complexity in mouse brain segmentation. We employed four segmentation models [single atlas, multiatlas, simultaneous truth and performance level estimation (STAPLE), and Markov random field (MRF)] via four different image registration algorithms [affine, B-spline free-form deformation (FFD), Demons, and large deformation diffeomorphic metric mapping (LDDMM)] for delineating 19 structures from in vivo magnetic resonance microscopy images. We validated their accuracies against manual segmentation. Our results revealed that LDDMM outperformed Demons, FFD and affine in all of the segmentation models. Under the same registration, increasing segmentation model complexity from single atlas to multiatlas, STAPLE or MRF significantly improved the segmentation accuracy. Interestingly, the multiatlas-based segmentations using nonlinear registrations (FFD, Demons and LDDMM) had performance similar to their STAPLE counterparts, while both outperformed their MRF counterparts. Furthermore, when the single-atlas affine segmentation was used as the reference, the improvement due to nonlinear registrations (FFD, Demons and LDDMM) in the single-atlas segmentation model was greater than that due to increasing model complexity (multiatlas, STAPLE and MRF affine segmentation). Hence, we conclude that image registration plays a more crucial role in atlas-based automatic mouse brain segmentation than model complexity. Multiple atlases with LDDMM best improve the segmentation accuracy among all segmentation models tested in this study.

  16. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.

  17. A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Hovland, P. D.

    2001-05-01

    An automatic differentiation (AD) technique was used to illustrate a new parameter tuning scheme for an uncoupled sea-ice model. The 1992 atmospheric forcing fields obtained from NCEP data were used as forcing variables in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic Ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function, defined by the norm of the difference between observed and simulated ice drift locations, was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The results of the study show that more realistic simulations of the ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm, a quasi-Newton method, was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computation of derivative information.
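
    The tuning loop amounts to bound-constrained minimization of the drift misfit, with gradients supplied by the AD-generated code. A sketch with SciPy's L-BFGS-B (the model wrapper `run_sea_ice_model`, the observations `observed_buoy_tracks`, the AD-supplied `cost_gradient`, and all starting values and bounds are hypothetical placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def cost(params):
    """Norm of the difference between simulated and observed (IABP)
    ice-drift locations under the 1992 NCEP forcing."""
    simulated = run_sea_ice_model(air_drag=params[0], ocean_drag=params[1],
                                  ice_strength=params[2],
                                  turning_angle=params[3],
                                  heat_transfer=params[4])
    return np.linalg.norm(simulated - observed_buoy_tracks)

x0 = np.array([1.2e-3, 5.5e-3, 2.75e4, 25.0, 1.75e-3])   # illustrative start
result = minimize(cost, x0, method="L-BFGS-B",
                  jac=cost_gradient,   # gradient from the AD-processed Fortran
                  bounds=[(1e-4, 5e-3), (1e-3, 1e-2), (1e4, 5e4),
                          (0.0, 45.0), (1e-4, 5e-3)])
```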

  18. Modelling Pasture-based Automatic Milking System Herds: Grazeable Forage Options

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    One of the challenges to increasing milk production in a large pasture-based herd with an automatic milking system (AMS) is to grow forages within a 1-km radius, as increased walking distance increases milking interval and reduces yield. The main objective of this study was to explore sustainable forage options that can supply large amounts of grazeable forage for AMS herds, using the Agricultural Production Systems Simulator (APSIM) model. Three different basic simulation scenarios (with irrigation) were carried out using forage crops (namely maize, soybean and sorghum) for the spring-summer period. Subsequent crops in the three scenarios were forage rape over-sown with ryegrass. Each individual simulation was run using actual climatic records for the period from 1900 to 2010. The simulated highest forage yields in the maize-, soybean- and sorghum-based rotations (each followed by forage rape-ryegrass) were 28.2, 22.9, and 19.3 t dry matter/ha, respectively. The simulations suggested that the irrigation requirement could increase by up to 18%, 16%, and 17%, respectively, in those rotations in El Niño years compared to neutral years, and by up to 25%, 23%, and 32% in El Niño years compared to La Niña years. Conversely, the irrigation requirement could decrease by up to 8%, 7%, and 13% in the maize-, soybean- and sorghum-based rotations in La Niña years compared to neutral years. The major implication of this study is that the APSIM model has potential for devising preferred forage options to maximise grazeable forage yield, which may create the opportunity to grow more forage in small areas around the AMS; this in turn will minimise walking distance and milking interval and thus increase milk production. Our analyses also suggest that simulation analysis may provide decision support during climatic uncertainty. PMID:25924963

  19. Automatic language identification based on Gaussian mixture model and universal background model

    NASA Astrophysics Data System (ADS)

    Qu, Dan; Wang, Bingxi; Wei, Xin

    2003-09-01

    Compared with other speech processing technologies, automatic language identification is a relatively new and difficult problem. In this paper, a language identification algorithm based on a Gaussian mixture model with a universal background model (GMM-UBM) is presented, and experiments are conducted using the OGI multi-language telephone speech corpus (OGI-TS). The experimental results show that GMM-UBM is an efficient approach to the language identification problem.
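
    A minimal GMM-UBM scoring sketch with scikit-learn (the feature inputs are assumed; re-fitting each language model with `means_init=ubm.means_` is a crude stand-in for the MAP adaptation normally used in GMM-UBM systems):

```python
from sklearn.mixture import GaussianMixture

# ubm_feats: pooled acoustic feature frames from all languages;
# lang_feats: dict mapping language name -> training frames (assumed inputs).
ubm = GaussianMixture(n_components=256, covariance_type="diag").fit(ubm_feats)

models = {lang: GaussianMixture(n_components=256, covariance_type="diag",
                                means_init=ubm.means_).fit(feats)
          for lang, feats in lang_feats.items()}

def identify(utterance_frames):
    """Average per-frame log-likelihood ratio against the UBM."""
    ubm_ll = ubm.score(utterance_frames)
    scores = {lang: gmm.score(utterance_frames) - ubm_ll
              for lang, gmm in models.items()}
    return max(scores, key=scores.get)
```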

  20. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty spreads into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases an automatic fitting method, built on combining geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect model quality, a MATLAB program has been provided that can produce optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, the cross-validation method has been used, and the best model is presented to the user as the output. To check the capability of the proposed objective function and procedure, three case studies are presented.
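
    A sketch of the annealing loop for a single spherical structure (the paper fits nested multi-structure models and selects among them by cross-validation; the Cressie-style weights and all tuning constants below are illustrative):

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical variogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, nugget + sill)

def wls(params, h, gamma_exp, npairs):
    """Weighted least squares objective (Cressie-style weights)."""
    model = spherical(h, *params)
    return np.sum(npairs * (gamma_exp - model) ** 2
                  / np.maximum(model, 1e-12) ** 2)

def fit_sa(h, gamma_exp, npairs, x0, steps=5000, t0=1.0, cool=0.999, seed=0):
    rnd = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = wls(x, h, gamma_exp, npairs)
    best, fbest, t = x.copy(), fx, t0
    for _ in range(steps):
        cand = np.abs(x + rnd.normal(scale=0.05 * (1.0 + x)))  # stay positive
        fc = wls(cand, h, gamma_exp, npairs)
        if fc < fx or rnd.random() < np.exp((fx - fc) / t):    # Metropolis
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cool                                              # cool down
    return best, fbest
```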

  1. Automatic detection of echolocation clicks based on a Gabor model of their waveform.

    PubMed

    Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine

    2015-06-01

    Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on these inferences, a detection algorithm involving post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real recordings (taken from the MobySound database). The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real-time. Performance comparisons are made to other publicly available detectors, including PAMGuard.
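
    The heart of the detector is compact: apply the TKEO, then threshold the ratio of a Gaussian-weighted moving average to a rectangular one. A sketch (window length, Gaussian width and threshold are illustrative, not the paper's values):

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def detect_clicks(signal, n=64, sigma=8.0, thresh=1.5):
    e = np.abs(tkeo(signal))
    # Gaussian and rectangular moving-average kernels of equal length.
    t = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-0.5 * (t / sigma) ** 2)
    gauss = np.convolve(e, g / g.sum(), mode="same")
    rect = np.convolve(e, np.ones(n) / n, mode="same") + 1e-12
    # The ratio peaks where the TKEO output is locally Gaussian-shaped,
    # i.e. at candidate Gabor-like clicks.
    return np.where(gauss / rect > thresh)[0] + 1   # +1 for the TKEO trim
```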

  2. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  3. Comparison of function approximation, heuristic, and derivative-based methods for automatic calibration of computationally expensive groundwater bioremediation models

    NASA Astrophysics Data System (ADS)

    Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.

    2005-11-01

    The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smallest deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective, and among all but the derivative-based algorithm for a total squared-error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed the heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods can be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.

  4. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  5. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler define the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS) system; (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  6. A physics-based defects model and inspection algorithm for automatic visual inspection

    NASA Astrophysics Data System (ADS)

    Xie, Yu; Ye, Yutang; Zhang, Jing; Liu, Li; Liu, Lin

    2014-01-01

    The representation of physical characteristics is the most essential feature of mathematical models used for defect detection in automatic inspection systems. However, the features of defects and the formation of the defect image are not sufficiently considered in traditional algorithms. This paper presents a mathematical model for defect inspection, denoted the localized defects image model (LDIM), which differs in that it models the features of manual inspection, using a local defect merit function to quantify the cost of a pixel being defective. This function comprises two components: color deviation and color fluctuation. Parameters related to statistics of the background region of images are also taken into consideration. Test results demonstrate that the model matches the definition of defects given by the international industrial standards IPC-A-610D and IPC-A-600G. Furthermore, the proposed approach enhances small defects to improve detection rates. Evaluation on a defect image database returned a 100% defect inspection rate with 0% false detections, showing that this method can be practically applied in manufacturing to quantify inspection standards and minimize false alarms resulting from human error.

  7. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.

  8. Automatic sex determination of skulls based on a statistical shape model.

    PubMed

    Luo, Li; Wang, Mengyang; Tian, Yun; Duan, Fuqing; Wu, Zhongke; Zhou, Mingquan; Rozenholc, Yves

    2013-01-01

    Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments have relied on subjective visual analysis by anthropologists or on metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed to project the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods and is easy to use without professional qualification or tedious manual measurement. From a group of Chinese skulls comprising 127 males and 81 females, we chose 92 males and 58 females to establish the discriminant model and validated it with the remaining skulls. The correct classification rate is 95.7% for females and 91.4% for males. A leave-one-out test also shows that the method has high accuracy.
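
    The pipeline is essentially a PCA shape space followed by Fisher discriminant analysis. A compact scikit-learn sketch (the vectorized skull data `X`, labels `y`, and the number of retained components are assumptions):

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# X: (n_skulls, n_features) skull meshes in dense correspondence, flattened;
# y: 0 = female, 1 = male (assumed inputs).
model = make_pipeline(PCA(n_components=30),           # statistical shape space
                      LinearDiscriminantAnalysis())   # Fisher discriminant

loo_accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
```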

  9. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    PubMed

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed.

  10. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  11. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656

  12. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    PubMed Central

    Ebied, Hala Mousher; Hussein, Ashraf Saad; Tolba, Mohamed Fahmy

    2014-01-01

    This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic version in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB is the best color space representation for the set of images used. PMID:25254226
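
    A sketch of GrabCut with OpenCV (the border-margin rectangle below is a crude stand-in for the paper's Orchard-Bouman-based automatic initialization, and the file name is hypothetical):

```python
import cv2
import numpy as np

img = cv2.imread("input.png")                        # hypothetical input
# To test another color space, convert first, e.g. HSV:
# img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)                  # GMM scratch buffers
fgd = np.zeros((1, 65), np.float64)

h, w = img.shape[:2]
rect = (w // 20, h // 20, w - w // 10, h - h // 10)  # automatic init region

cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)
```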

  13. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas have become available for 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same center as their nearest point of higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlay. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts respectively and assemble a single part-based 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China, which includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlays. Experimental results show that the proposed method can extract exact attributes and model roadside pole-like objects efficiently.

  14. An active contour-based atlas registration model applied to automatic subthalamic nucleus targeting on MRI: method and validation.

    PubMed

    Duay, Valérie; Bresson, Xavier; Castro, Javier Sanchez; Pollo, Claudio; Cuadra, Meritxell Bach; Thiran, Jean-Philippe

    2008-01-01

    This paper presents a new nonparametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods and at the same time to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting experts' variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.

  15. Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation.

    PubMed

    Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi

    2017-02-22

    One major limiting factor preventing the accurate delineation of human organs has been the presence of severe pathology, including pathology affecting organ borders. Overcoming these limitations is exactly what this study is concerned with. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework and leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to finer local details in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 database for the liver segmentation competition, where it ranks 3rd among all published state-of-the-art automatic methods. Our method is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans, and its results are compared with three closely related methods. The applicability of the proposed method to the segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate organ boundaries with a level of accuracy comparable to that of human raters.
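
    The LRSD machinery referenced here is the standard low-rank plus sparse decomposition (principal component pursuit). A minimal sketch via the inexact augmented Lagrangian method, not the paper's specific LRSD-SM/LRSD-PA formulation:

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iters=500, tol=1e-7):
    """Decompose M into low-rank L plus sparse S (gross errors)."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iters):
        # Singular-value thresholding recovers the low-rank subspace.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # Entrywise soft-thresholding isolates the sparse gross errors.
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```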

  16. An automatic image-based modelling method applied to forensic infography.

    PubMed

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated or non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  17. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives, as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated or non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628

  18. A statistically based seasonal precipitation forecast model with automatic predictor selection and its application to central and south Asia

    NASA Astrophysics Data System (ADS)

    Gerlitz, Lars; Vorogushyn, Sergiy; Apel, Heiko; Gafurov, Abror; Unger-Shayesteh, Katy; Merz, Bruno

    2016-11-01

    The study presents a statistically based seasonal precipitation forecast model, which automatically identifies suitable predictors from globally gridded sea surface temperature (SST) and climate variables by means of an extensive data-mining procedure and explicitly avoids the utilization of typical large-scale climate indices. This leads to enhanced flexibility of the model and enables its automatic calibration for any target area without any prior assumption concerning adequate predictor variables. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated into predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is constructed by means of the preliminarily generated predictor variables. Monthly predictions are aggregated to running 3-month periods in order to generate a seasonal precipitation forecast. The model is applied and evaluated for selected target regions in central and south Asia. Particularly for winter and spring in westerly-dominated central Asia, correlation coefficients between forecasted and observed precipitation reach values up to 0.48, although the variability of precipitation rates is strongly underestimated. Likewise, for the monsoonal precipitation amounts in the south Asian target area, correlations of up to 0.5 were detected. The skill of the model for the dry winter season over south Asia is found to be low. A sensitivity analysis with well-known climate indices, such as the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern, reveals the major large-scale controlling mechanisms of the seasonal precipitation climate for each target area. For the central Asian target areas, both
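
    The final forecasting stage reduces to one random-forest regressor per month and lead time. A scikit-learn sketch (variable names and shapes are assumptions; the correlation screening and cluster-based predictor construction are omitted):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# For each target month m at a fixed lead time, one model is fit on
# historical predictor/precipitation pairs (assumed inputs: X_train[m] of
# shape (n_years, n_predictor_regions), y_train[m] of shape (n_years,)).
models = [RandomForestRegressor(n_estimators=500, random_state=0)
              .fit(X_train[m], y_train[m]) for m in range(12)]

# Predict each month of the target year, then aggregate the monthly
# predictions to running 3-month seasonal totals.
monthly = np.array([models[m].predict(X_target[m])[0] for m in range(12)])
seasonal = np.convolve(monthly, np.ones(3), mode="valid")
```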

  19. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    NASA Astrophysics Data System (ADS)

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.

    2015-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we mapped the 16-bit DEM to 256 gray levels for data compression; then, for better visualization, the grayscale image was converted into an RGB image. After that, a median filter was applied twice to the DEM for data optimization, producing smooth, continuous outlines for the subsequent construction of the contour plane. Considering that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of contour planes of the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise searching method. A library of training samples is then constructed from the targets calculated from sample DEM data, in which real crater targets are manually labeled as positive samples and non-crater objects as negative ones. Several morphological features are calculated for all these samples: the major axis (L), the circumference (C), the area inside the boundary (S), and the radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C and S/L as the key factors for identifying craters, and apply the Fisher discrimination method to the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data for identifying lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness, and is applicable to the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.

  20. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping

    2016-09-01

    An adaptive edge detection and mapping (AEDM) algorithm is presented to address the challenging one-dimensional barcode recognition task in the presence of both image degradation and barcode shape deformation. AEDM is an edge detection-based method that has three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase detects the edge points in a scan line. The edge positions are assumed to be the intersection points between a scan line and a corresponding well-designed reference line. The third phase adjusts the preliminary edge positions to more reasonable positions by employing prior information from the coding rules. A universal edge mapping model is thus established to obtain the coding position of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were implemented to evaluate the performance of the AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and combinations of these issues.

  1. Automatic Detection of Student Mental Models Based on Natural Language Student Input during Metacognitive Skill Training

    ERIC Educational Resources Information Center

    Lintean, Mihai; Rus, Vasile; Azevedo, Roger

    2012-01-01

    This article describes the problem of detecting students' mental models, i.e. their knowledge states, during the self-regulatory activity of prior knowledge activation in MetaTutor, an intelligent tutoring system that teaches students self-regulation skills while they learn complex science topics. The article presents several approaches to…

  2. The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces

    NASA Astrophysics Data System (ADS)

    JianPing, Gu

    The integration of services is transparent: users no longer face millions of individual Web services, need not care where the required data are stored, and need not learn how to obtain them. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the execution cost, we propose a type-based optimization method and a schema-matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema-matching method. Experiments prove the effectiveness and efficiency of our method.

  3. Automatic target recognition with image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-09-01

    In past decades, the solution to the ATR problem has been thought of as a solution to the Pattern Recognition problem. The reasons why the Pattern Recognition problem has never been solved successfully and reliably for real-world images are more serious than a lack of appropriate ideas. Vision is part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. Vision mechanisms cannot be completely understood apart from the informational processes related to knowledge and intelligence. A reliable solution to the ATR problem is possible only within the solution of the more generic Image Understanding problem. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise computations of 3-D models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic transformations make possible invariant recognition of a real-world object as an exemplar of a class. This allows for creating ATR systems that are reliable in field conditions.

  4. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial-phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed to be impulses undergoing group velocity dispersion while propagating along a multipath neural connection. A mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the Matlab technical computing language is provided online.
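
    As a rough illustration of the optimization step, the sketch below fits a linear chirp (the lowest-order polynomial-phase case) to a synthetic noisy waveform with a plain global-best PSO loop. The waveform, parameter bounds and PSO constants are all invented for the example and stand in for real SEP epochs; Python is used here purely for illustration.

    ```python
    # PSO fit of a linear-chirp template to a synthetic noisy waveform.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 0.1, 500)                   # 100 ms epoch
    true = np.array([4.0, 30.0, 200.0])              # amplitude, f0 (Hz), rate
    sep = true[0] * np.cos(2 * np.pi * (true[1] * t + 0.5 * true[2] * t**2))
    sep += rng.normal(scale=0.5, size=t.size)        # measurement noise

    def mse(p):
        a, f0, k = p
        model = a * np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))
        return np.mean((sep - model) ** 2)

    # Plain global-best PSO over the 3 chirp parameters.
    lo, hi = np.array([0, 1, 0]), np.array([10, 100, 500])
    x = rng.uniform(lo, hi, (40, 3))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([mse(p) for p in x])
    for _ in range(200):
        g = pbest[np.argmin(pcost)]                  # swarm's best-so-far
        v = 0.7 * v + 1.5 * rng.random(x.shape) * (pbest - x) \
                    + 1.5 * rng.random(x.shape) * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([mse(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]

    print(pbest[np.argmin(pcost)])                   # approx. [4, 30, 200]
    ```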

  5. A neurocomputational model of automatic sequence production.

    PubMed

    Helie, Sebastien; Roeder, Jessica L; Vucovich, Lauren; Rünger, Dennis; Ashby, F Gregory

    2015-07-01

    Most behaviors unfold in time and include a sequence of submovements or cognitive activities. In addition, most behaviors are automatic and repeated daily throughout life. Yet, relatively little is known about the neurobiology of automatic sequence production. Past research suggests a gradual transfer from the associative striatum to the sensorimotor striatum, but a number of more recent studies challenge this role of the basal ganglia (BG) in automatic sequence production. In this article, we propose a new neurocomputational model of automatic sequence production in which the main role of the BG is to train cortical-cortical connections within the premotor areas that are responsible for automatic sequence production. The new model is used to simulate four different data sets from human and nonhuman animals, including (1) behavioral data (e.g., RTs), (2) electrophysiology data (e.g., single-neuron recordings), (3) macrostructure data (e.g., TMS), and (4) neurological circuit data (e.g., inactivation studies). We conclude with a comparison of the new model with existing models of automatic sequence production and discuss a possible new role for the BG in automaticity and its implications for Parkinson's disease.

  6. A semi-automatic framework for highway extraction and vehicle detection based on a geometric deformable model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Road extraction and vehicle detection are two of the most important steps of traffic flow analysis from multi-frame aerial photographs. The traditional way of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming work. To improve this process, this research presents a new semi-automatic framework for highway extraction and vehicle detection from aerial photographs. The basis of the new framework is a geometric deformable model, which minimizes an objective function that connects the optimization problem with the propagation of regular curves. Utilizing an implicit representation of the two-dimensional curve, the implementation of this model is capable of dealing with topological changes during the curve deformation process, and the output is independent of the position of the initial curves. A seed point propagation framework is designed and implemented that incorporates highway extraction, tracking, and linking into one procedure. Manually selected seed points can be automatically propagated throughout a whole highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction and vehicle detection from a large orthophoto mosaic. In this research, vehicles on the extracted highway network were detected with an 83% success rate.

  7. Automatic enrollment for gait-based person re-identification

    NASA Astrophysics Data System (ADS)

    Ortells, Javier; Martín-Félez, Raúl; Mollineda, Ramón A.

    2015-02-01

    Automatic enrollment involves a critical decision-making process within the person re-identification context. However, this process has traditionally been undervalued. This paper studies the problem of automatic person enrollment from a realistic perspective relying on gait analysis. Experiments simulating random flows of people, with considerable appearance variations between different observations of a person, have been conducted, modeling both short- and long-term scenarios. Promising results based on ROC analysis show that automatically enrolling people by their gait is feasible, with high success rates.

  8. Biological models for automatic target detection

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce

    2008-04-01

    Humans are better at detecting targets in literal imagery than any known algorithm. Recent advances in modeling visual processes have resulted from fMRI brain imaging in humans and the use of more invasive techniques with monkeys. There are four startling new discoveries. 1) The visual cortex does not simply process an incoming image; it constructs a physics-based model of the image. 2) Coarse category classification and range-to-target are estimated quickly, possibly through the dorsal pathway of the visual cortex, combining rapid coarse processing of image data with expectations and goals. These data are then fed back to lower levels to resize the target and enhance the recognition process feeding forward through the ventral pathway. 3) Giant photosensitive retinal ganglion cells provide data for maintaining circadian rhythm (time of day) and modeling the physics of the light source. 4) Five filter types implemented by the neurons of the primary visual cortex have been determined. A computer model for automatic target detection has been developed based upon these recent discoveries. It uses an artificial neural network architecture with multiple feed-forward and feedback paths. Our implementation's efficiency derives from the observation that any 2-D filter kernel can be approximated by a sum of 2-D box functions, and a 2-D box function easily decomposes into two 1-D box functions. Further efficiency is obtained by decomposing the largest neural filter into a high-pass filter and a more sparsely sampled low-pass filter.
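
    The box-function observation maps directly onto the classic summed-area-table trick. The sketch below, using invented data, shows how one 2-D box sum is evaluated with four table lookups; a full filter bank would approximate each kernel as a weighted sum of such boxes.

    ```python
    # O(1)-per-pixel box sums via an integral image (summed-area table).
    import numpy as np

    def integral_image(img):
        # S[i, j] = sum of img[:i, :j]; zero-padded on the top/left edges.
        return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    def box_sum(S, r0, c0, r1, c1):
        # Sum of img[r0:r1, c0:c1] from four lookups in the integral image S.
        return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

    img = np.arange(36.0).reshape(6, 6)
    S = integral_image(img)
    # 3x3 box average centred at (2, 2) -- equivalent to a separable pair of
    # 1-D boxes (rows then columns), as noted in the abstract.
    print(box_sum(S, 1, 1, 4, 4) / 9.0, img[1:4, 1:4].mean())
    ```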

  9. Automatic detection of lung nodules in CT datasets based on stable 3D mass-spring models.

    PubMed

    Cascio, D; Magro, R; Fauci, F; Iacomi, M; Raso, G

    2012-11-01

    We propose a computer-aided detection (CAD) system which can detect small (from 3 mm) pulmonary nodules in spiral CT scans. A pulmonary nodule is a small lesion in the lungs, round-shaped (parenchymal nodule) or worm-shaped (juxtapleural nodule). Both kinds of lesions have a radio-density greater than that of the lung parenchyma, thus appearing white on the images. Lung nodules might indicate lung cancer, and their early-stage detection arguably improves the patient survival rate. CT is considered to be the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes the full analysis difficult, leading to omission of nodules by the radiologist. We developed an advanced computerized method for the automatic detection of internal and juxtapleural nodules on low-dose, thin-slice lung CT scans. This method consists of the initial selection of a list of nodule candidates, the segmentation of each candidate nodule, and the classification of features computed for each segmented nodule candidate. The presented CAD system aims to reduce the number of omissions and to decrease the radiologist's scan examination time. Our system locates both internal and juxtapleural nodules with the same scheme. For a correct volume segmentation of the lung parenchyma, the system uses a Region Growing (RG) algorithm and an opening process for including the juxtapleural nodules. The segmentation and extraction of the suspected nodular lesions from CT images by a lung CAD system constitutes a hard task. In order to solve this key problem, we use a new Stable 3D Mass-Spring Model (MSM) combined with a spline curve reconstruction process. Our model concurrently represents the characteristic gray value range, the directed contour information as well as shape knowledge, which leads to a much more robust and efficient segmentation process. For distinguishing the real nodules among nodule candidates, an additional classification step is applied
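
    The lung-volume segmentation step is built on Region Growing. Below is a minimal 2-D sketch of that algorithm on a synthetic slice; the HU-like values and the threshold are illustrative, and the real system operates in 3-D with an additional opening step for juxtapleural nodules.

    ```python
    # Minimal 2-D region growing from a seed, absorbing neighbours whose
    # intensity stays below an air/tissue threshold (synthetic slice).
    from collections import deque
    import numpy as np

    def region_grow(img, seed, threshold):
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                        and img[rr, cc] < threshold:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
        return mask

    slice_ = np.full((64, 64), 40.0)      # soft tissue (bright, HU-like)
    slice_[16:48, 8:30] = -900.0          # air-filled lung region (dark)
    lung = region_grow(slice_, seed=(30, 15), threshold=-400.0)
    print(lung.sum())                     # number of segmented pixels
    ```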

  10. An automatic composition model of Chinese folk music

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaomei; Li, Dongyang; Wang, Lei; Shen, Lin; Gao, Yanyuan; Zhu, Yuanyuan

    2017-03-01

    Automatic composition has achieved rich results in recent decades for Western and some other musical traditions. However, the automatic composition of Chinese music has received less attention. After thousands of years of development, Chinese folk music offers a wealth of resources. Designing an automatic composition model that learns the characteristics of Chinese folk melody and imitates the creative process of music is therefore of some significance. According to the melodic features of Chinese folk music, a Chinese folk music composition model based on a Markov model is proposed to analyze Chinese traditional music. Folk songs with typical Chinese national characteristics are selected for analysis. In this paper, an example of automatic composition is given. The experimental results show that this composition model can produce music with the characteristics of Chinese folk music.
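
    A first-order Markov composition step of the kind described can be sketched in a few lines. Everything below is a toy stand-in: a pentatonic pitch set, three hand-written example phrases, add-one smoothing, and sampling of a new melody from the learned transition matrix.

    ```python
    # Toy first-order Markov melody model: estimate pitch-transition
    # probabilities from example phrases, then sample a new melody.
    import numpy as np

    rng = np.random.default_rng(3)
    scale = ["C", "D", "E", "G", "A"]          # anhemitonic pentatonic degrees
    corpus = [["C", "D", "E", "G", "E", "D", "C"],
              ["E", "G", "A", "G", "E", "D", "C"],
              ["D", "E", "G", "A", "G", "E", "D"]]

    idx = {p: i for i, p in enumerate(scale)}
    counts = np.ones((5, 5))                   # add-one smoothing
    for song in corpus:
        for a, b in zip(song, song[1:]):
            counts[idx[a], idx[b]] += 1
    P = counts / counts.sum(axis=1, keepdims=True)

    note = idx["C"]
    melody = ["C"]
    for _ in range(15):                        # sample 15 further notes
        note = rng.choice(5, p=P[note])
        melody.append(scale[note])
    print(" ".join(melody))
    ```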

  11. Hidden Markov models in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition (ASR) system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals, and provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The specific components of the system and the procedures used to model and recognize speech are described, together with problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. Different options for the choice of speech signal segments and their consequences for the ASR process are presented, with special attention given to the use of lexical, syntactic, and semantic information for improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS, discusses the results of experiments on the effect of noise on the performance of the ASR system, and describes methods of constructing HMMs designed to operate in a noisy environment. Finally, the author describes a language for human-robot communication, defined as a complex multilevel network built from an HMM model of speech sounds geared towards Polish inflections, to which mandatory lexical and syntactic rules were added for its communications vocabulary.
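
    For readers unfamiliar with HMM decoding, the sketch below shows the Viterbi algorithm on a toy discrete-observation model; the transition, emission and initial probabilities are invented, standing in for the trained speech-sound models discussed in the article.

    ```python
    # Viterbi decoding of a toy 2-state, 3-symbol HMM in log space.
    import numpy as np

    A = np.array([[0.7, 0.3],                  # state transition probabilities
                  [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1],             # emission probabilities per state
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.6, 0.4])                  # initial state distribution
    obs = [0, 1, 2, 2, 1]                      # observed symbol indices

    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]            # best log-score ending in each state
    psi = []
    for o in obs[1:]:
        scores = delta[:, None] + logA         # scores[i, j]: best path via i -> j
        psi.append(scores.argmax(axis=0))      # remember best predecessor of j
        delta = scores.max(axis=0) + logB[:, o]

    path = [int(delta.argmax())]
    for back in reversed(psi):                 # backtrack the argmax pointers
        path.append(int(back[path[-1]]))
    print(path[::-1])                          # most likely state sequence
    ```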

  12. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of the problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as the key bottleneck for the model's application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case study, on which automatic parameter identification was conducted with satisfactory results.
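
    The MCS identification step can be illustrated with a toy example. In the sketch below, the hydraulic solver is replaced by a deliberately simple placeholder function (a real application would call a network solver such as EPANET), and candidate roughness values are sampled until the predicted pressures best match the "observed" ones.

    ```python
    # Monte Carlo sampling for parameter identification (illustrative only).
    import numpy as np

    rng = np.random.default_rng(4)

    def hydraulic_model(roughness, demand):
        # Placeholder: head loss grows with demand^2 / roughness.
        return 50.0 - demand**2 / roughness

    demand = np.array([2.0, 3.0, 4.0])
    observed = hydraulic_model(np.array([110.0, 95.0, 130.0]), demand)  # "SCADA"

    best_err, best_p = np.inf, None
    for _ in range(5000):                      # Monte Carlo parameter sampling
        candidate = rng.uniform(80.0, 150.0, size=3)
        predicted = hydraulic_model(candidate, demand)
        err = np.mean((predicted - observed) ** 2)
        if err < best_err:
            best_err, best_p = err, candidate
    print(best_p, best_err)                    # identified roughness, residual
    ```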

  13. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced Voigt model and local symmetry.

    PubMed

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases and exhibits midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model, which takes into account the size and spatial information of the lesion, is devised to predict the deformed midline. A composite local symmetry measure, combining local intensity symmetry and local intensity gradient symmetry, is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric data set has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and an average maximum difference of 1.89 ± 1.18 mm, from the ground truth. Experiments show that the proposed method yields better accuracy when the geometric center of the pathology is taken as the geometric center of the tumor and the pathological region as the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, for delineation of the deformed midline, this is the first report both on gliomas and from MRI, which hopefully will provide valuable information for diagnosis

  14. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics.

    PubMed

    Islam, M R; Clark, C E F; Garcia, S C; Kerrisk, K L

    2015-07-01

    The aim of this modelling study was to investigate the effect of large herd size (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties when 50% of the total diet was provided from home grown feed either as pasture or grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed 'moderate'; optimum pasture utilisation of 19.7 t DM/ha, termed 'high') and 2 rates of incorporation of a grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of the farm is CFR and 35% of the farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in total walking distance between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h as herd size increased from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distance by up to 1 km, and thus the MI by up to 0.5 h, compared to the moderate pasture, 800 cow herd combination. High pasture utilisation combined with 30% of the farm in CFR reduced the total walking distance by up to 1.7 km and the MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with losses increasing from 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary return

  15. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics

    PubMed Central

    Islam, M. R.; Clark, C. E. F.; Garcia, S. C.; Kerrisk, K. L.

    2015-01-01

    The aim of this modelling study was to investigate the effect of large herd size (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties when 50% of the total diet was provided from home grown feed either as pasture or grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed ‘moderate’; optimum pasture utilisation of 19.7 t DM/ha, termed ‘high’) and 2 rates of incorporation of a grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of the farm is CFR and 35% of the farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in total walking distance between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h as herd size increased from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distance by up to 1 km, and thus the MI by up to 0.5 h, compared to the moderate pasture, 800 cow herd combination. High pasture utilisation combined with 30% of the farm in CFR reduced the total walking distance by up to 1.7 km and the MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with losses increasing from 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary

  16. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  17. Automatic anatomy recognition via fuzzy object models

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Falcão, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Matsumoto, Monica; Grevera, George J.; Saboury, Babak; Torigian, Drew A.

    2012-02-01

    To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) during radiological image reading becomes essential. As part of this larger goal, last year at this conference we presented a fuzzy strategy for building body-wide group-wise anatomic models. In the present paper, we describe the further advances made in fuzzy modeling and the algorithms and results achieved for AAR by using the fuzzy models. The proposed AAR approach consists of three distinct steps: (a) building fuzzy object models (FOMs) for each population group G; (b) using the FOMs to recognize the individual objects in any given patient image I under group G; (c) delineating the recognized objects in I. This paper focuses mostly on (b). FOMs are built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. The hierarchical pose relationships from parent to offspring are codified in the FOMs. Several approaches are being explored currently, grouped under two strategies, both hierarchical: (1) those using search strategies; (2) those employing a one-shot approach by which the model pose is directly estimated without searching. Based on 32 patient CT data sets each from the thorax and abdomen and 25 objects modeled, our analysis indicates that objects do not all scale uniformly with patient size. Even the simplest of the one-shot strategies, recognizing the root object and then placing all other descendants as per the learned parent-to-offspring pose relationships, brings the models on average within about 18 mm of the true locations.

  18. Automatic mathematical modeling for real time simulation program (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1989-01-01

    A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.

  19. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  20. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
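
    A heavily simplified, hypothetical version of such a QA check is sketched below: a scalar overlap feature stands in for the full OVH, and a linear fit trained on one cohort predicts the achieved mean organ dose of another. The feature, doses and split sizes are invented for the illustration.

    ```python
    # Toy OVH-style QA sketch: regress achieved mean dose on an overlap feature.
    import numpy as np

    rng = np.random.default_rng(8)
    n_train, n_test = 58, 57
    overlap = rng.uniform(0.1, 0.6, n_train + n_test)   # toy overlap feature
    dmean = 20.0 + 40.0 * overlap + rng.normal(0, 1.5, overlap.size)  # Gy

    coef = np.polyfit(overlap[:n_train], dmean[:n_train], 1)  # train on 58 plans
    pred = np.polyval(coef, overlap[n_train:])                # predict the rest
    error = pred - dmean[n_train:]                            # flag large errors
    print(error.mean(), error.std())
    ```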

  1. An information processing model of anxiety: automatic and strategic processes.

    PubMed

    Beck, A T; Clark, D A

    1997-01-01

    A three-stage schema-based information processing model of anxiety is described that involves: (a) the initial registration of a threat stimulus; (b) the activation of a primal threat mode; and (c) the secondary activation of more elaborative and reflective modes of thinking. The defining elements of automatic and strategic processing are discussed with the cognitive bias in anxiety reconceptualized in terms of a mixture of automatic and strategic processing characteristics depending on which stage of the information processing model is under consideration. The goal in the treatment of anxiety is to deactivate the more automatic primal threat mode and to strengthen more constructive reflective modes of thinking. Arguments are presented for the inclusion of verbal mediation as a necessary but not sufficient component in the cognitive and behavioral treatment of anxiety.

  2. Roads Centre-Axis Extraction in Airborne SAR Images: AN Approach Based on Active Contour Model with the Use of Semi-Automatic Seeding

    NASA Astrophysics Data System (ADS)

    Lotte, R. G.; Sant'Anna, S. J. S.; Almeida, C. M.

    2013-05-01

    Research dealing with computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. The cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, leaving this task an open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are then used as input for the extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.

  3. Automatic draft reading based on image processing

    NASA Astrophysics Data System (ADS)

    Tsujii, Takahiro; Yoshida, Hiromi; Iiguni, Youji

    2016-10-01

    In marine transportation, a draft survey is a means to determine the quantity of bulk cargo. Automatic draft reading based on computer image processing has been proposed. However, conventional draft mark segmentation may fail when the video sequence contains many regions other than the draft marks and the hull, and the estimated waterline is inherently higher than the true one. To solve these problems, we propose an automatic draft reading method that uses morphological operations to detect draft marks and estimates the waterline in every frame with Canny edge detection and robust estimation. Moreover, we emulate the surveyors' draft reading process so that the result is acceptable to both shipper and receiver. In an experiment in a towing tank, the draft reading error of the proposed method was <1 cm, showing the advantage of the proposed method. Accurate draft reading is also shown to be achieved in a real-world scene.
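
    A stripped-down version of the waterline step is sketched below: a synthetic hull/water image is built, the strongest vertical intensity transition is found per column, and a median across columns provides a robust estimate that rejects columns disturbed by draft-mark paint. The real method uses Canny edge detection on video frames; the gradient-plus-median scheme here is a simplified stand-in.

    ```python
    # Robust waterline estimate on a synthetic hull/water image.
    import numpy as np

    rng = np.random.default_rng(5)
    img = np.vstack([np.full((40, 120), 200.0),   # hull (bright)
                     np.full((24, 120), 60.0)])   # water (dark)
    img += rng.normal(scale=8.0, size=img.shape)
    img[10:50, 55:60] = 20.0                      # dark draft-mark stroke (outlier)

    grad = np.abs(np.diff(img, axis=0))           # vertical gradient magnitude
    candidates = grad.argmax(axis=0)              # strongest edge row per column
    waterline_row = np.median(candidates)         # robust horizontal-line fit
    print(waterline_row)                          # approx. row 39
    ```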

  4. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry are not yet available, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped to appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus, a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity-based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. The final section demonstrates the capability of the feature decomposer on some complicated manufactured parts.

  5. Using automatic programming for simulating reliability network models

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  6. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
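
    The kind of Markov model such a tool generates behind the scenes can be illustrated with a minimal continuous-time example: two redundant processors with an absorbing system-failure state, solved with a matrix exponential. The structure and rates below are invented for the example.

    ```python
    # Minimal continuous-time Markov reliability model (illustrative rates).
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4                                   # failures per hour, per processor
    # States: 0 = both up, 1 = one up, 2 = system failed (absorbing).
    Q = np.array([[-2 * lam,  2 * lam,  0.0],
                  [0.0,      -lam,      lam],
                  [0.0,       0.0,      0.0]])

    p0 = np.array([1.0, 0.0, 0.0])               # start with both processors up
    t = 10_000.0                                  # mission time (hours)
    p_t = p0 @ expm(Q * t)                        # state distribution at time t
    print("reliability:", p_t[0] + p_t[1])        # probability not yet failed
    ```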

  7. Automatic GPS satellite based subsidence measurements for Ekofisk

    SciTech Connect

    Mes, M.J.; Luttenberger, C.; Landau, H.; Gustavsen, K.

    1995-12-01

    A fully automatic procedure for measuring the subsidence of many platforms in almost real time is presented. Such a method is important for developments which may be subject to subsidence and where reliable subsidence and subsidence-rate measurements are needed for safety, planning of remedial work and verification of subsidence models. Automatic GPS satellite based subsidence measurements are made continuously on platforms in the North Sea Ekofisk Field area. A description of the system is given. The derivation of the parameters which give optimal measurement accuracy is described, and the results of these derivations are provided. GPS satellite based measurements are equivalent to pressure gauge based platform subsidence measurements, but they are much cheaper to install and maintain. In addition, GPS based measurements are not subject to gauge drift. GPS measurements were coupled to oceanographic quantities such as the platform deck clearance; these quantities now follow from GPS based measurements.

  8. Automatic identification of fault surfaces through Object Based Image Analysis of a Digital Elevation Model in the submarine area of the North Aegean Basin

    NASA Astrophysics Data System (ADS)

    Argyropoulou, Evangelia

    2015-04-01

    The current study focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, resulting in fault surface extraction. An Object Based Image Analysis approach was developed based on the bathymetric data, and the extracted features, based on morphological criteria, were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 m spatial resolution was used. First, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target. At level 4, the final classes of geomorphological features were classified: discontinuities, fault-like features and fault surfaces. On previous levels, additional landforms were also classified, such as the continental platform and the continental slope. The results of the developed approach were evaluated by two methods. First, classification stability measures were computed within eCognition. Then, a qualitative and quantitative comparison of the results took place against a reference tectonic map which had been created manually based on the analysis of seismic profiles. The results of this comparison were satisfactory, which confirms the validity of the developed OBIA approach.

  9. MATURE: A Model Driven bAsed Tool to Automatically Generate a langUage That suppoRts CMMI Process Areas spEcification

    NASA Astrophysics Data System (ADS)

    Musat, David; Castaño, Víctor; Calvo-Manzano, Jose A.; Garbajosa, Juan

    Many companies have achieved higher quality in their processes by using CMMI. Process definition may be efficiently supported by software tools. A higher automation level makes process improvement and assessment activities easier to adapt to customer needs. At present, automation of CMMI is based on tools that support practice definition in a textual way. These tools are often enhanced spreadsheets. In this paper, following the Model Driven Development (MDD) paradigm, a tool that supports automatic generation of a language that can be used to specify process area practices is presented. The generation is performed from a metamodel that represents CMMI. This tool, unlike others available, can be customized according to user needs. Guidelines to specify the CMMI metamodel are also provided. The paper also shows how this approach can support other assessment methods.

  10. Nonlinear spectro-temporal features based on a cochlear model for automatic speech recognition in a noisy situation.

    PubMed

    Choi, Yong-Sun; Lee, Soo-Young

    2013-09-01

    A nonlinear speech feature extraction algorithm was developed by modeling human cochlear functions, and demonstrated as a noise-robust front-end for speech recognition systems. The algorithm was based on a model of the Organ of Corti in the human cochlea with features such as the basilar membrane (BM), outer hair cells (OHCs), and inner hair cells (IHCs). The frequency-dependent nonlinear compression and amplification of OHCs were modeled by lateral inhibition to enhance spectral contrasts. In particular, the compression coefficients were frequency-dependent, based on psychoacoustic evidence. Spectral subtraction and temporal adaptation were applied in the time-frame domain. With long-term and short-term adaptation characteristics, these factors remove stationary or slowly varying components and amplify temporal changes such as onsets or offsets. The proposed features were evaluated with a noisy speech database and showed better performance than baseline methods such as mel-frequency cepstral coefficients (MFCCs) and RASTA-PLP in unknown noisy conditions.
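
    The lateral-inhibition idea can be shown with a one-dimensional toy spectrum: a centre-surround kernel with zero net gain boosts a narrow formant-like peak while cancelling a flat noise floor. The kernel and spectrum below are illustrative, not the coefficients used in the paper.

    ```python
    # Centre-surround (lateral inhibition) contrast enhancement along frequency.
    import numpy as np

    freq_bins = np.arange(64)
    spectrum = np.exp(-0.5 * ((freq_bins - 20) / 2.5) ** 2)  # formant-like peak
    spectrum += 0.3                                           # broadband floor

    kernel = np.array([-0.25, -0.5, 1.5, -0.5, -0.25])  # excitatory centre,
    enhanced = np.convolve(spectrum, kernel, mode="same")  # inhibitory surround
    enhanced = np.maximum(enhanced, 0.0)                   # half-wave rectify

    # Peak-to-mean contrast before vs. after the inhibition stage.
    print(spectrum.max() / spectrum.mean(),
          enhanced.max() / (enhanced.mean() + 1e-9))
    ```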

  11. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models with digital photographs using software packages such as Maxon Cinema 4D, Autodesk 3ds Max or Maya still requires a complex and time-consuming workflow. Procedures for automatic texture mapping of 3D models are therefore in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures via web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  12. Case-based synthesis in automatic advertising creation system

    NASA Astrophysics Data System (ADS)

    Zhuang, Yueting; Pan, Yunhe

    1995-08-01

    Advertising (ad) design is an important application area. Though many interactive ad-design software packages have come into commercial use, none of them supports the intelligent part of the work: automatic ad creation. The potential for this is enormous. This paper describes our current work on an automatic advertising creation system (AACS). After careful analysis of the mental behavior of a human ad designer, we conclude that a case-based approach is appropriate for its intelligent modeling. A model for AACS is given in the paper. A case in advertising is described in two parts: the creation process and the configuration of the ad picture, with detailed data structures given in the paper. Along with the case representation, we put forward an algorithm. Issues such as similarity measure computation and case adaptation are also discussed.

  13. Digital movie-based on automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the hue (H) values for each frame. Pearson's correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was attested by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%.
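
    The frame-analysis loop of DMB-AT can be approximated in a few lines. The sketch below is entirely synthetic: per-pixel hue fields for a 28×13 ROI are fabricated so the colour drifts after a chosen endpoint frame, Pearson's r against the first frame is tracked, and the endpoint is located at the steepest change of the r curve (a discrete second-derivative test).

    ```python
    # Synthetic hue-correlation titration curve with second-derivative endpoint.
    import colorsys
    import numpy as np

    rng = np.random.default_rng(6)
    shape = (28, 13)
    pat_a = rng.uniform(0.02, 0.14, shape)   # hue field before the endpoint
    pat_b = rng.uniform(0.02, 0.14, shape)   # hue field of the product colour

    def frame(k, endpoint=12, ramp=6):
        w = np.clip((k - endpoint) / ramp, 0.0, 1.0)
        hue = (1 - w) * pat_a + w * pat_b + rng.normal(0, 0.002, shape)
        # Encode hue directly in the green channel: rgb = (255, 6*hue*255, 0).
        roi = np.zeros(shape + (3,))
        roi[..., 0] = 255.0
        roi[..., 1] = np.clip(6 * hue, 0, 1) * 255.0
        return roi

    def hue_map(roi):
        flat = roi.reshape(-1, 3) / 255.0
        return np.array([colorsys.rgb_to_hsv(*p)[0] for p in flat])

    h0 = hue_map(frame(0))
    r = np.array([np.corrcoef(h0, hue_map(frame(k)))[0, 1] for k in range(26)])
    end = np.argmax(np.abs(np.diff(r, 2))) + 1   # steepest change of the r curve
    print(end)                                    # frame index near the endpoint
    ```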

  14. Automatic paper sliceform design from 3D solid models.

    PubMed

    Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N

    2013-11-01

    A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches interlock each other. This guarantees local pairwise assemblability between patches, which is empirically shown to lead to global assemblability. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.

  15. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high-resolution surface temperature records of the Greenland ice sheet is an integral condition. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and the related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin of and processes driving the so-called Dansgaard-Oeschger events under glacial conditions [Johnsen et al. (1992); Dansgaard et al. (1982)], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled on the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant on orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. Reconstructing the surface temperature for a particular drilling site requires firn models describing gas and temperature diffusion throughout the ice sheet. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. Thereby, a theoretical δ15N record is generated for different temperature and accumulation rate scenarios and compared with measurement data in terms of the mean square error (MSE), which finally leads to an optimization problem, namely finding the minimal MSE. The goal of the presented study is a Matlab-based automatization of this inverse modelling procedure. The crucial point is to find the temperature and accumulation rate input time series that minimize the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit
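
    The Monte Carlo branch of the input generator can be sketched as a simple accept-if-better loop. Below (in Python rather than Matlab, purely for illustration), the firn densification and heat diffusion model is replaced by a crude placeholder (smoothing plus scaling), so the code only illustrates the optimization logic, not the physics.

    ```python
    # Monte Carlo point-wise perturbation of a temperature history, accepting
    # candidates that reduce the MSE against a "measured" delta-15N record.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50                                        # time steps in the history

    def forward_d15n(temperature):
        # Placeholder forward model: smoothed, scaled temperature series
        # standing in for the real firn densification/diffusion model.
        kernel = np.ones(5) / 5.0
        return 0.01 * np.convolve(temperature, kernel, mode="same")

    true_T = -30.0 + 3.0 * np.sin(np.linspace(0, 4 * np.pi, n))
    measured = forward_d15n(true_T) + rng.normal(0, 0.002, n)

    T = np.full(n, -30.0)                         # initial guess
    mse = np.mean((forward_d15n(T) - measured) ** 2)
    for _ in range(20000):
        cand = T.copy()
        cand[rng.integers(n)] += rng.normal(0, 0.5)  # vary one point
        cand_mse = np.mean((forward_d15n(cand) - measured) ** 2)
        if cand_mse < mse:                           # keep improving candidates
            T, mse = cand, cand_mse
    print(mse)
    ```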

  16. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    To maintain a predominantly pasture-based system, a large herd milked by an automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1 km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially ‘concentrating’ feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed moderate; optimum pasture utilisation of 19.7 t DM/ha, termed high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1 km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha of land), with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively, from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively, with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (providing 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within a 1 km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  17. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances.

    PubMed

    Islam, M R; Garcia, S C; Clark, C E F; Kerrisk, K L

    2015-06-01

    To maintain a predominantly pasture-based system, a large herd milked by an automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1 km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially 'concentrating' feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed moderate; optimum pasture utilisation of 19.7 t DM/ha, termed high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1 km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha of land), with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively, from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively, with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (providing 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within a 1 km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  18. Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition

    DTIC Science & Technology

    1999-05-01

    The experiment was repeated T = 1000 times using independent Monte Carlo simulations. A. Uniform Target Test Set. For each k = 1, 2, ..., N a model M_k ... correct identification (PID) and probability of false alarm (PFA) over T = 1000 Monte Carlo realizations of the experiment data. The false alarm rate is

  19. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  20. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  1. Automatic Speech Recognition Based on Electromyographic Biosignals

    NASA Astrophysics Data System (ADS)

    Jou, Szu-Chen Stan; Schultz, Tanja

    This paper presents our studies of automatic speech recognition based on electromyographic biosignals captured from the articulatory muscles in the face using surface electrodes. We develop a phone-based speech recognizer and describe how the performance of this recognizer improves by carefully designing and tailoring the extraction of relevant speech features toward electromyographic signals. Our experimental design includes the collection of audibly spoken speech simultaneously recorded as acoustic data using a close-speaking microphone and as electromyographic signals using electrodes. Our experiments indicate that electromyographic signals precede the acoustic signal by about 0.05-0.06 seconds. Furthermore, we introduce articulatory feature classifiers, which have recently been shown to improve classical speech recognition significantly. We show that the classification accuracy of articulatory features clearly benefits from the tailored feature extraction. Finally, these classifiers are integrated into the overall decoding framework applying a stream architecture. Our final system achieves a word error rate of 29.9% on a 100-word recognition task.
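
    The reported 0.05-0.06 s lead of the EMG over the acoustic signal is the kind of offset one can estimate by cross-correlating the two channels. A minimal sketch, assuming equally sampled signals; the synthetic data stand in for real recordings.

        import numpy as np

        def estimate_lead_seconds(emg, audio, fs):
            """Estimate how far the EMG channel leads the audio channel by
            cross-correlating the two (assumed equally sampled) signals."""
            emg = (emg - emg.mean()) / emg.std()
            audio = (audio - audio.mean()) / audio.std()
            xcorr = np.correlate(audio, emg, mode="full")
            lag = np.argmax(xcorr) - (len(emg) - 1)   # samples audio trails emg
            return lag / fs

        # Synthetic check: EMG leading the audio by 55 ms at fs = 1 kHz.
        fs = 1000
        t = np.arange(0, 2.0, 1 / fs)
        emg = np.sin(2 * np.pi * 3 * t)
        audio = np.roll(emg, 55)                      # delayed copy of the EMG
        print(f"estimated lead: {estimate_lead_seconds(emg, audio, fs):.3f} s")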

  2. [Research on automatic external defibrillator based on DSP].

    PubMed

    Jing, Jun; Ding, Jingyan; Zhang, Wei; Hong, Wenxue

    2012-10-01

    Electrical defibrillation is the most effective way to treat ventricular tachycardia (VT) and ventricular fibrillation (VF). An automatic external defibrillator based on DSP is introduced in this paper. The whole design consists of the signal collection module, the microprocessor control module, the display module, the defibrillation module and the automatic recognition algorithm for VF and non-VF. This automatic external defibrillator achieves real-time ECG signal acquisition, synchronous ECG waveform display, data export to a USB flash drive and automatic defibrillation when a shockable rhythm appears.

  3. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  4. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  5. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Astrophysics Data System (ADS)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  6. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).
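
    The admittances being calibrated relate flow to pressure drop. A minimal sketch of one such calibration, assuming the usual orifice-flow model Q = A*sqrt(dP); KATE's actual model and measurement interface are not described here, and the sample data are made up.

        import numpy as np

        def calibrate_admittance(flows, pressure_drops):
            """Least-squares fit of A in Q = A*sqrt(dP) from paired measurements."""
            x = np.sqrt(np.asarray(pressure_drops))
            q = np.asarray(flows)
            return float(x @ q / (x @ x))   # closed-form 1-parameter least squares

        dps = [4.0, 9.0, 16.0, 25.0]        # measured pressure drops (made up)
        qs = [2.1, 2.9, 4.1, 5.0]           # measured flows, consistent with A ~ 1.0
        print(f"calibrated admittance A = {calibrate_admittance(qs, dps):.3f}")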

  7. LADAR And FLIR Based Sensor Fusion For Automatic Target Classification

    NASA Astrophysics Data System (ADS)

    Selzer, Fred; Gutfinger, Dan

    1989-01-01

    The purpose of this report is to show results of automatic target classification and sensor fusion for forward looking infrared (FLIR) and Laser Radar sensors. The sensor fusion data base was acquired from the Naval Weapon Center and it consists of coregistered Laser RaDAR (range and reflectance image), FLIR (raw and preprocessed image) and TV. Using this data base we have developed techniques to extract relevant object edges from the FLIR and LADAR which are correlated to wireframe models. The resulting correlation coefficients from both the LADAR and FLIR are fused using either the Bayesian or the Dempster-Shafer combination method so as to provide a higher confidence target classification level output. Finally, to minimize the correlation process the wireframe models are modified to reflect target range (size of target) and target orientation which is extracted from the LADAR reflectance image.
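
    Dempster-Shafer combination, one of the two fusion rules named above, can be sketched for a two-hypothesis frame (target / non-target). How the FLIR and LADAR correlation coefficients are mapped to mass functions below is an assumption.

        # Sketch of fusing two sensors' evidence with Dempster's rule over the
        # frame {T, N} (target / non-target), with 'TN' denoting ignorance.

        def dempster_combine(m1, m2):
            """Combine two mass functions over the frame {'T', 'N', 'TN'}."""
            def meet(a, b):
                sa = set('TN') if a == 'TN' else {a}
                sb = set('TN') if b == 'TN' else {b}
                i = sa & sb
                return None if not i else ('TN' if i == {'T', 'N'} else i.pop())
            combined = {'T': 0.0, 'N': 0.0, 'TN': 0.0}
            conflict = 0.0
            for a, pa in m1.items():
                for b, pb in m2.items():
                    c = meet(a, b)
                    if c is None:
                        conflict += pa * pb         # contradictory evidence
                    else:
                        combined[c] += pa * pb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        m_flir = {'T': 0.6, 'N': 0.1, 'TN': 0.3}    # from a FLIR correlation score
        m_ladar = {'T': 0.7, 'N': 0.2, 'TN': 0.1}   # from a LADAR correlation score
        print(dempster_combine(m_flir, m_ladar))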

  8. 11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND BUILT BY WES. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  9. Automatic Building Information Model Query Generation

    SciTech Connect

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  10. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large scale, detailed simulations for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at an appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed to evaluate this technology's use over the next two to three years.

  11. Graphical models and automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Bilmes, Jeff A.

    2002-11-01

    Graphical models (GMs) are a flexible statistical abstraction that has been successfully used to describe problems in a variety of different domains. Commonly used for ASR, hidden Markov models are only one example of the large space of models constituting GMs. Therefore, GMs are useful for understanding existing ASR approaches and also offer a promising path towards novel techniques. In this work, several such ways are described, including (1) using both directed and undirected GMs to represent sparse Gaussian and conditional Gaussian distributions, (2) GMs for representing information fusion and classifier combination, (3) GMs for representing hidden articulatory information in a speech signal, (4) structural discriminability, where the graph structure itself is discriminative, and the difficulties that arise when learning discriminative structure, (5) switching graph structures, where the graph may change dynamically, and (6) language modeling. The graphical model toolkit (GMTK), a software system for general graphical-model based speech recognition and time series analysis, will also be described, including a number of GMTK's features that are specifically geared to ASR.

  12. Image analysis techniques associated with automatic data base generation.

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.
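
    The speed argument for recursive filters can be made concrete with a moving-average (box) filter: computed recursively, its cost is independent of kernel length, unlike direct convolution. A minimal sketch of that idea (not the paper's specific filters):

        import numpy as np

        def box_filter_recursive(x, k):
            """Length-k moving average computed recursively: each output sample
            updates the previous running sum with one add and one subtract, so
            the cost is O(n) regardless of kernel length."""
            x = np.asarray(x, dtype=float)
            out = np.empty(len(x) - k + 1)
            s = x[:k].sum()
            out[0] = s / k
            for i in range(1, len(out)):
                s += x[i + k - 1] - x[i - 1]   # slide the window by one sample
                out[i] = s / k
            return out

        x = np.random.default_rng(1).normal(size=10_000)
        ref = np.convolve(x, np.ones(101) / 101, mode="valid")
        print(np.allclose(box_filter_recursive(x, 101), ref))  # True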

  13. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  14. a Sensor Based Automatic Ovulation Prediction System for Dairy Cows

    NASA Astrophysics Data System (ADS)

    Mottram, Toby; Hart, John; Pemberton, Roy

    2000-12-01

    Sensor scientists have been successful in developing detectors for tiny concentrations of rare compounds, but the work is rarely applied in practice. Any but the most trivial application of sensors requires a specification that should include a sampling system, a sensor, a calibration system and a model of how the information is to be used to control the process of interest. The specification of the sensor system should ask the following questions: How will the material to be analysed be sampled? What decision can be made with the information available from a proposed sensor? This project provides a model of a systems approach to the implementation of automatic ovulation prediction in dairy cows. A healthy, well managed dairy cow should calve every year to make the best use of forage. As most cows are inseminated artificially, it is of vital importance that cows are regularly monitored for signs of oestrus. The pressure on dairymen to manage more cows often leads to less time being available for observation of cows to detect oestrus. This, together with breeding and feeding for increased yields, has led to a reduction in reproductive performance. In the UK the typical dairy farmer could save €12800 per year if ovulation could be predicted accurately. Research over a number of years has shown that regular analysis of milk samples with tests based on enzyme linked immunoassay (ELISA) can map the ovulation cycle. However, these tests require the farmer to implement a manually operated sampling and analysis procedure, and the technique has not been widely taken up. The best potential method of achieving 98% specificity of prediction of ovulation is to adapt biosensor techniques to emulate the ELISA tests automatically in the milking system. An automated ovulation prediction system for dairy cows is specified. The system integrates a biosensor with automatic milk sampling and a herd management database. The biosensor is a screen printed carbon electrode system capable of

  15. Incremental logistic regression for customizing automatic diagnostic models.

    PubMed

    Tortajada, Salvador; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    In the last decades, and following the new trends in medicine, statistical learning techniques have been used to develop automatic diagnostic models that aid clinical experts through Clinical Decision Support Systems. The development of these models requires a large, representative amount of data, which is commonly obtained from one hospital or a group of hospitals after an expensive and time-consuming gathering, preprocessing and validation of cases. After development, a model has to overcome an external validation that is often carried out in a different hospital or health center. Experience shows that such models often fall short of expectations. Furthermore, sending and storing patient data requires ethical approval and patient consent. For these reasons, we introduce an incremental learning algorithm based on the Bayesian inference approach that may allow us to build an initial model with a smaller number of cases and update it incrementally when new data are collected, or even recalibrate a model from a different center using a reduced number of cases. The performance of our algorithm is demonstrated on different benchmark datasets and a real brain tumor dataset; we compare its performance to a previous incremental algorithm and a non-incremental Bayesian model, showing that the algorithm is independent of the data model, iterative, and has good convergence.
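
    A minimal sketch of the incremental idea, not the authors' algorithm: logistic regression updated batch by batch with a Gaussian prior centred on the previous weights, so each new centre's data refines rather than replaces the model. All names and constants below are assumptions.

        import numpy as np

        class IncrementalLogReg:
            """Logistic regression updated batch-by-batch: a minimal stand-in
            for the paper's Bayesian incremental scheme, using MAP gradient
            steps with a Gaussian prior centred on the current weights."""

            def __init__(self, dim, prior_scale=1.0, lr=0.1, epochs=50):
                self.w = np.zeros(dim)
                self.prior_scale, self.lr, self.epochs = prior_scale, lr, epochs

            def partial_fit(self, X, y):
                w0 = self.w.copy()                  # prior mean: previous weights
                for _ in range(self.epochs):
                    p = 1.0 / (1.0 + np.exp(-X @ self.w))
                    grad = X.T @ (p - y) / len(y) + (self.w - w0) / self.prior_scale**2
                    self.w -= self.lr * grad

            def predict_proba(self, X):
                return 1.0 / (1.0 + np.exp(-X @ self.w))

        rng = np.random.default_rng(2)
        X = rng.normal(size=(200, 3)); w_true = np.array([1.5, -2.0, 0.5])
        y = (1 / (1 + np.exp(-X @ w_true)) > rng.uniform(size=200)).astype(float)
        model = IncrementalLogReg(dim=3)
        for batch in range(4):                      # data arrives in four batches
            sl = slice(50 * batch, 50 * (batch + 1))
            model.partial_fit(X[sl], y[sl])
        print(model.w.round(2))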

  16. Edge Segment-Based Automatic Video Surveillance

    NASA Astrophysics Data System (ADS)

    Hossain, M. Julius; Dewan, M. Ali Akber; Chae, Oksam

    2007-12-01

    This paper presents a moving-object segmentation algorithm that uses edge information represented as segments. The proposed method is developed to address challenges due to variations in ambient lighting and background contents. We investigated the suitability of the proposed algorithm in comparison with traditional intensity-based as well as edge-pixel-based detection methods. In our method, edges are extracted from video frames and are represented as segments using an efficiently designed edge class. This representation helps to obtain the geometric information of edges during edge matching and moving-object segmentation, and facilitates incorporating knowledge into edge segments during background modeling and motion tracking. An efficient approach for background initialization and a robust method of edge matching are presented to effectively reduce the risk of false alarms due to illumination changes and camera motion while maintaining high sensitivity to the presence of moving objects. Detected moving edges are utilized along with a watershed algorithm for extracting a video object plane (VOP) with a more accurate boundary. Experimental results with real image sequences show that the proposed method is suitable for automated video surveillance applications in various monitoring systems.

  17. Using suggestion to model different types of automatic writing.

    PubMed

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  18. Analysis of Automatic Automotive Gear Boxes by Means of Versatile Graph-Based Methods

    NASA Astrophysics Data System (ADS)

    Drewniak, J.; Kopeć, J.; Zawiślak, S.

    Automotive gear boxes are special mechanisms built from planetary gear sets and additionally equipped with control systems. The control system allows for the activation of particular drives. In the present paper, some graph-based models of these boxes are considered, i.e. contour, bond and mixed graphs. An exemplary automatic gear box is considered. Based upon the introduced models, ratios for some drives have been calculated. Advantages of the proposed modeling method are its algorithmic approach and simplicity.
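
    The ratios such graph models produce agree with classical planetary-gear algebra. As a point of reference (standard gear theory, not the authors' contour/bond-graph formulation), the Willis relation for one stage gives:

        # Willis relation for a single planetary stage:
        #   (w_sun - w_carrier) / (w_ring - w_carrier) = -z_ring / z_sun

        def ratio_sun_to_carrier_ring_fixed(z_sun, z_ring):
            """Drive ratio w_sun / w_carrier with the ring gear held, a typical
            'first gear' configuration in an automatic box."""
            # Willis with w_ring = 0:  w_sun / w_carrier = 1 + z_ring / z_sun
            return 1 + z_ring / z_sun

        print(ratio_sun_to_carrier_ring_fixed(z_sun=30, z_ring=90))  # -> 4.0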

  19. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  20. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have involved an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways models are captured in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
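
    The Coons surfaces mentioned above interpolate a patch from its four boundary curves. A minimal sketch of a bilinearly blended Coons patch, with the boundary curves assumed given as callables that agree at the corners:

        import numpy as np

        def coons(u, v, c0, c1, d0, d1):
            """Bilinearly blended Coons patch from boundaries c0(u) at v=0,
            c1(u) at v=1, d0(v) at u=0, d1(v) at u=1."""
            p00, p01, p10, p11 = c0(0), c1(0), c0(1), c1(1)
            ruled_v = (1 - v) * c0(u) + v * c1(u)
            ruled_u = (1 - u) * d0(v) + u * d1(v)
            bilinear = ((1 - u) * (1 - v) * p00 + (1 - u) * v * p01
                        + u * (1 - v) * p10 + u * v * p11)
            return ruled_v + ruled_u - bilinear

        # Flat unit square as a trivial check: all boundaries are straight lines.
        c0 = lambda u: np.array([u, 0.0, 0.0]); c1 = lambda u: np.array([u, 1.0, 0.0])
        d0 = lambda v: np.array([0.0, v, 0.0]); d1 = lambda v: np.array([1.0, v, 0.0])
        print(coons(0.5, 0.5, c0, c1, d0, d1))   # -> [0.5 0.5 0. ]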

  1. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method based on the super-pixel density of cluster centers is proposed. Image pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations, and a normalized density-distance discrimination rule is designed to select cluster centers automatically, whereby images are automatically classified and outliers identified. Extensive experiments show that our method requires no human intervention, computes faster than density clustering on raw pixels, and effectively automates image classification and outlier extraction.
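
    The density-and-distance construction described here resembles the well-known clustering-by-density-peaks recipe. A minimal sketch under that reading; the cutoff distance is an assumption and the super-pixel preprocessing is omitted.

        import numpy as np

        def density_and_delta(points, dc):
            """For each point: local density rho (cutoff kernel) and delta, the
            distance to the nearest point of higher density."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            rho = (d < dc).sum(axis=1) - 1
            delta = np.empty(len(points))
            for i in range(len(points)):
                higher = np.where(rho > rho[i])[0]
                delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
            return rho, delta

        rng = np.random.default_rng(3)
        pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
        rho, delta = density_and_delta(pts, dc=0.5)
        centers = np.argsort(rho * delta)[-2:]   # large rho*delta -> cluster centers
        print("cluster centers:", pts[centers].round(2))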

  2. Knowledge-based system for automatic MBR control.

    PubMed

    Comas, J; Meabe, E; Sancho, L; Ferrero, G; Sipma, J; Monclús, H; Rodriguez-Roda, I

    2010-01-01

    MBR technology is currently challenging traditional wastewater treatment systems and is increasingly selected for WWTP upgrading. MBR systems typically are constructed on a smaller footprint and provide superior treated water quality. However, the main drawback of MBR technology is that the permeability of membranes declines during filtration due to membrane fouling, which for a large part causes the high aeration requirements of an MBR to counteract this fouling phenomenon. Due to the complex and still unknown mechanisms of membrane fouling, it is neither possible to describe its development clearly by means of a deterministic model, nor to control it with a purely mathematical law. Consequently, the majority of MBR applications are controlled in an "open-loop" way, i.e. with predefined and fixed air scour and filtration/relaxation or backwashing cycles, and scheduled inline or offline chemical cleaning as a preventive measure, without taking into account the real needs of membrane cleaning based on its filtration performance. However, existing theoretical and empirical knowledge about potential cause-effect relations between a number of factors (influent characteristics, biomass characteristics and operational conditions) and MBR operation can be used to build a knowledge-based decision support system (KB-DSS) for the automatic control of MBRs. This KB-DSS contains a knowledge-based control module which, based on real-time comparison of the current permeability trend with "reference trends", aims at optimizing the operation and energy costs and decreasing fouling rates. In practice, the proposed automatic control system regulates the set points of the key operational variables controlled in MBR systems (permeate flux, relaxation and backwash times, backwash flows and times, aeration flow rates, chemical cleaning frequency, waste sludge flow rate and recycle flow rates) and identifies their optimal values. This paper describes the concepts and the 3-level architecture

  3. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, during international negotiations on separating disputed areas, manual adjustment is the only method applied to match a delimitation line to the real terrain, which not only consumes much time and labor but also cannot ensure high precision. This paper therefore explores the automatic match between the two and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, a cost layer is derived through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Module Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from several aspects, including delimitation laws and two-sided benefits. Consequently, it is concluded that the automatic match method is feasible and effective.
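
    Least-Cost Path Analysis reduces to a shortest-path computation over a cost raster. A minimal sketch with Dijkstra's algorithm; the 4-neighbourhood and the toy cost grid are assumptions.

        import heapq

        def least_cost_path(cost, start, goal):
            """Cheapest 4-connected path over a cost raster (Dijkstra)."""
            rows, cols = len(cost), len(cost[0])
            dist = {start: cost[start[0]][start[1]]}
            prev, pq = {}, [(dist[start], start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[(r, c)]:
                    continue                       # stale queue entry
                for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + cost[nr][nc]
                        if nd < dist.get((nr, nc), float("inf")):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(pq, (nd, (nr, nc)))
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        grid = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]   # cheap corridor down the middle
        print(least_cost_path(grid, (0, 0), (2, 2)))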

  4. On Automatic Support to Indexing a Life Sciences Data Base.

    ERIC Educational Resources Information Center

    Vleduts-Stokolov, N.

    1982-01-01

    Describes technique developed as automatic support to subject heading indexing at BIOSIS based on use of formalized language for semantic representation of biological texts and subject headings. Language structures, experimental results, and analysis of journal/subject heading and author/subject heading correlation data are discussed. References…

  5. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  6. Time series modeling for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Sokolnikov, Andre

    2012-05-01

    Time series modeling is proposed for the identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. A missing-data approach is elaborated that supports the prediction process for image creation or reconstruction. The results are provided.

  7. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  8. A manual and an automatic TERS based virus discrimination

    NASA Astrophysics Data System (ADS)

    Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen

    2015-02-01

    Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised with a classification accuracy of 91%.

  9. Automatic Structure-Based Code Generation from Coloured Petri Nets: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Kristensen, Lars Michael; Westergaard, Michael

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs (PP-CPNs), a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs cater for a four-phase structure-based automatic code generation process directed by the control flow of processes. The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF).

  10. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow make it possible to increase the efficiency of the production planning process in dynamic manufacturing networks. The use of a semi-automatic model generation concept based on a parametric approach to support production planning processes is presented. The presented approach allows simulation and visualization to be used for the verification of production plans and of alternative topologies of manufacturing network configurations, together with the automatic generation of a series of production flow scenarios. Computational examples using the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  11. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
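
    The bootstrap step can be sketched independently of the inversion itself: resample the observations, re-estimate, and read parameter spreads off the resampled estimates. Everything below (the "observations" and the median estimator) is a placeholder for the actual mass-image inversion.

        import numpy as np

        rng = np.random.default_rng(4)
        obs = rng.normal(35.0, 4.0, size=60)       # placeholder half-angle estimates (deg)

        def estimate_half_angle(sample):
            return np.median(sample)               # stand-in for the inversion routine

        boot = np.array([estimate_half_angle(rng.choice(obs, size=obs.size, replace=True))
                         for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"half-angle ~ {boot.mean():.1f} deg, 95% CI [{lo:.1f}, {hi:.1f}]")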

  12. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  13. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, with the in-situ measurements correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for flight and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross-section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, allowing the automatic growth estimation process to be evaluated. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
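
    The height step reduces to differencing a local ground level and a crown top in the per-tree point cloud. A toy sketch, assuming trees have already been separated and z is up; the percentile choices are assumptions.

        import numpy as np

        def tree_height(points, ground_pct=2, top_pct=99.5):
            """Height from a per-tree point cloud (N x 3 array, z up)."""
            z = points[:, 2]
            ground = np.percentile(z, ground_pct)   # robust lowest surface
            top = np.percentile(z, top_pct)         # robust crown top
            return top - ground

        rng = np.random.default_rng(5)
        crown = rng.normal([0, 0, 4.0], [0.5, 0.5, 1.0], size=(400, 3))
        ground = rng.normal([0, 0, 0.0], [1.0, 1.0, 0.05], size=(200, 3))
        print(f"height ~ {tree_height(np.vstack([crown, ground])):.2f} m")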

  14. Automatic computational models of acoustical category features: Talking versus singing

    NASA Astrophysics Data System (ADS)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: The Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
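
    The Kolmogorov-Smirnov measure of feature separation can be computed directly from the two labelled samples of a feature. A minimal sketch, with synthetic feature values standing in for real acoustic measurements.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(6)
        talking = rng.normal(0.2, 0.1, size=300)   # feature values, talking events
        singing = rng.normal(0.5, 0.1, size=300)   # feature values, singing events

        stat, pvalue = ks_2samp(talking, singing)
        print(f"KS distance = {stat:.2f} (p = {pvalue:.1e})")
        # Larger KS distance -> better separated class-conditional densities,
        # so the feature is more useful for the talking/singing classifier.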

  15. Edge density based automatic detection of inflammation in colonoscopy videos.

    PubMed

    Ševo, I; Avramović, A; Balasingham, I; Elle, O J; Bergsland, J; Aabakken, L

    2016-05-01

    Colon cancer is one of the deadliest diseases, and early detection can prolong life and increase survival rates. The early stage disease is typically associated with polyps and mucosa inflammation. The commonly used diagnostic tools rely on high quality videos obtained from colonoscopy or capsule endoscopes. The state-of-the-art image processing techniques of video analysis for automatic detection of anomalies use statistical and neural network methods. In this paper, we investigated a simple alternative model-based approach using texture analysis. The method can easily be implemented in parallel processing mode for real-time applications. A characteristic texture of inflamed tissue is used to distinguish between inflammatory and healthy tissues, and an appropriate filter kernel is proposed and implemented to efficiently detect this specific texture. The basic method is further improved to eliminate the effect of blood vessels present in the lower part of the descending colon. Both variants of the proposed method are described in detail and were tested in two different computer experiments. Our results show that the inflammatory region can be detected in real time with an accuracy of over 84%. Furthermore, the experimental study showed that it is possible to detect certain segments of video frames containing inflammations with a detection accuracy above 90%.

  16. Research on Air Traffic Control Automatic System Software Reliability Based on Markov Chain

    NASA Astrophysics Data System (ADS)

    Wang, Xinglong; Liu, Weixiang

    Ensuring the safe spacing of aircraft and the high efficiency of air traffic are the main tasks of an air traffic control automatic system. An Air Traffic Control Automatic System (ATCAS) Markov model is put forward in this paper, drawing on 36 months of ATCAS failure data. A method to predict the states s1, s2, s3 of the ATCAS is based on a Markov chain, which predicts and validates the reliability of the ATCAS according to reliability theory. The experimental results show that the method can be used for future research and proves to be practicable.
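
    A minimal sketch of the Markov-chain step: propagate a state distribution through an estimated transition matrix and extract the stationary distribution. The three states and the matrix entries below are illustrative assumptions (e.g. s1 = healthy, s2 = degraded, s3 = failed), not the 36-month ATCAS data.

        import numpy as np

        P = np.array([[0.95, 0.04, 0.01],    # transitions from s1
                      [0.10, 0.85, 0.05],    # transitions from s2
                      [0.50, 0.00, 0.50]])   # transitions from s3 (after repair)

        state = np.array([1.0, 0.0, 0.0])    # system starts healthy
        for month in (1, 6, 12):
            print(month, np.linalg.matrix_power(P.T, month) @ state)

        # Long-run availability: the stationary distribution (left eigenvector).
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
        print("steady state:", pi.round(3))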

  17. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    NASA Astrophysics Data System (ADS)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful to assist an ophthalmologist in ROP screening and to prevent childhood blindness. This paper proposes a method to automatically classify images as tortuous or non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using the segments characterized by the highest tortuosity. For an optimal set of training parameters the prediction is as high as 100%.
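
    Tortuosity indices for a vessel segment are functions of its centreline geometry. The paper computes tortuosity from segment curvature; the simpler arc-to-chord ratio below is a stand-in to show the shape of the computation (1.0 for a straight segment).

        import numpy as np

        def tortuosity(points):
            """points: (N, 2) array of ordered centreline samples of one segment."""
            steps = np.diff(points, axis=0)
            arc = np.linalg.norm(steps, axis=1).sum()       # path length
            chord = np.linalg.norm(points[-1] - points[0])  # end-to-end distance
            return arc / chord

        t = np.linspace(0, np.pi, 100)
        wavy = np.column_stack([t, 0.3 * np.sin(4 * t)])    # tortuous vessel
        straight = np.column_stack([t, np.zeros_like(t)])
        print(tortuosity(wavy), tortuosity(straight))       # ~1.3 vs 1.0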

  18. The acoustic-modeling problem in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Brown, Peter F.

    1987-12-01

    This thesis examines the acoustic-modeling problem in automatic speech recognition from an information-theoretic point of view. This problem is to design a speech-recognition system which can extract from the speech waveform as much information as possible about the corresponding word sequence. The information extraction process is broken down into two steps: a signal processing step which converts a speech waveform into a sequence of information bearing acoustic feature vectors, and a step which models such a sequence. This thesis is primarily concerned with the use of hidden Markov models to model sequences of feature vectors which lie in a continuous space such as R^N. It explores the trade-off between packing a lot of information into such sequences and being able to model them accurately. The difficulty of developing accurate models of continuous parameter sequences is addressed by investigating a method of parameter estimation which is specifically designed to cope with inaccurate modeling assumptions.

  19. Wind modeling and lateral control for automatic landing

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Bryson, A. E., Jr.

    1975-01-01

    For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aerodynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.
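
    The shaping-filter idea can be sketched with a first-order lag driven by white noise, a common low-order stand-in for the von Karman spectrum (whose rational approximations are higher order). All parameter values below are illustrative assumptions.

        import numpy as np

        V, L, sigma, dt = 70.0, 200.0, 1.5, 0.01   # airspeed m/s, scale m, gust std m/s, step s
        tau = L / V                                 # gust correlation time constant
        a = np.exp(-dt / tau)                       # AR(1) pole of the discretised lag
        q = sigma * np.sqrt(1 - a**2)               # sets the stationary std to sigma

        rng = np.random.default_rng(7)
        gust = np.zeros(20_000)
        for i in range(1, gust.size):
            gust[i] = a * gust[i - 1] + q * rng.normal()   # filtered white noise

        print(f"simulated gust std ~ {gust.std():.2f} m/s (target {sigma})")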

  20. Automatic learning-based beam angle selection for thoracic IMRT

    SciTech Connect

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G. Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
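
    The learning step maps per-angle anatomical features to a beam score. A minimal sketch with a random forest regressor; the feature definitions, the greedy top-k selection, and the omission of the learned interbeam dependencies are all simplifications of the paper's method.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(8)
        n_plans, n_angles = 120, 36                   # candidate gantry angles, 10 deg apart
        X = rng.normal(size=(n_plans * n_angles, 5))  # per-angle anatomical features (made up)
        y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.1]) + rng.normal(0, 0.2, len(X))

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        new_case = rng.normal(size=(n_angles, 5))     # features for one new patient
        scores = model.predict(new_case)
        beams = np.argsort(scores)[-7:] * 10          # keep 7 beams, as gantry angles
        print("selected beam angles (deg):", sorted(beams))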

  1. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  2. A learning-based automatic spinal MRI segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Samarabandu, Jagath; Garvin, Greg; Chhem, Rethy; Li, Shuo

    2008-03-01

    Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve an automatic clinical segmentation, which requires speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) images is extremely challenging, as variations in soft tissue contrast and radio-frequency (RF) inhomogeneities cause image intensity variations. Moreover, little work has been done in this area. We propose a generic, slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of our contribution is that the proposed method is able to segment multiple images of different slices simultaneously. Our proposed method also has the potential to be imaging modality independent, as it is not specific to a particular imaging modality. The proposed method consists of two stages: candidate generation and verification. The candidate generation stage is aimed at obtaining the segmentation through energy minimization. In this stage, images are first partitioned into a number of image regions. Then, Support Vector Machines (SVM) are applied to those pre-partitioned image regions to obtain class conditional distributions, which are then fed into an energy function and optimized with the graph-cut algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones. Experimental results show that the proposed method is very efficient and robust with respect to image slices.

  3. [Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    We have collected massive stellar spectral data in recent years, which makes the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]) an important issue. Studying the automatic measurement of these three parameters is significant for scientific problems such as the evolution of the universe. However, research on this problem is not very extensive, and some current methods cannot estimate the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can predict stellar effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]. The method requires little computation and trains quickly. Its main idea is to first build a set of mass distributions, then map the original spectral data into the mass space, and finally predict the stellar parameters with support vector regression (SVR) in the mass space. We chose stellar spectral data from the United States SDSS-DR8 for training and testing, compared the predicted results of this method with those of the SSPP, and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively.

  4. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on the automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest the textual annotation of shots (close-up estimation). On the other hand, we were interested in automatically detecting and recognizing the different TV logos present on incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, a hybrid text-image indexing and retrieval platform for video news.

  5. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras, and its main goal is to monitor seismic activity, generate seismic hazard alerts and provide data for Brazilian tectonic and structure research. Each institution operates and maintains its seismic network, sharing data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are performed automatically by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (H-k stacking) and surface wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithosphere seismic velocity models using a joint inversion scheme. An analyst can review the results, change processing parameters, and select or reject the RF traces and SWD curves used in the lithosphere model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  6. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
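
    As an illustration of the final refinement step, here is a compact NumPy/SciPy sketch of ICP with SVD-based (Kabsch) rigid alignment; the coarse alignment from 3D-SIFT/FPFH matching and SAC-IA is assumed to have been applied already.

```python
# Sketch: ICP refinement of a coarsely aligned source point set onto a target.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Refine alignment of source (N,3) onto target (M,3) point sets."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)            # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)         # Kabsch: optimal rotation from SVD
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                 # apply the rigid transform
    return src
```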

  7. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  8. Pavement crack identification based on automatic threshold iterative method

    NASA Astrophysics Data System (ADS)

    Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao

    2017-01-01

    Crack detection is an important issue in concrete infrastructure. First, the accuracy of crack geometry parameter measurement, and hence of the detection system, is directly affected by the extraction accuracy. Because cracks are unpredictable, random and irregular, it is difficult to establish a recognition model for them. Second, various kinds of image noise, caused by irregular lighting conditions, dark spots, freckles and bumps, influence the crack detection accuracy. This paper improves the peak threshold selection method: enhancement, smoothing and denoising are performed before iterative threshold selection, allowing the threshold value to be selected automatically, stably and in real time.
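
    A minimal sketch of the iterative threshold selection itself (the classic ISODATA-style iteration) might look as follows, assuming NumPy and that the enhancement, smoothing and denoising steps have already been applied.

```python
# Sketch: iterative (ISODATA-style) automatic threshold selection.
import numpy as np

def iterative_threshold(image, eps=0.5):
    t = image.mean()                           # initial guess: global mean
    while True:
        fg, bg = image[image > t], image[image <= t]
        if fg.size == 0 or bg.size == 0:       # degenerate split: stop
            return t
        t_new = 0.5 * (fg.mean() + bg.mean())  # midpoint of the two class means
        if abs(t_new - t) < eps:               # converged
            return t_new
        t = t_new

# crack_mask = image < iterative_threshold(image)   # cracks appear dark
```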

  9. Automatic Target Recognition Based on Cross-Plot

    PubMed Central

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository. PMID:21980508

  10. A fully automatic system for acid-base coulometric titrations

    PubMed Central

    Cladera, A.; Caro, A.; Estela, J. M.; Cerdà, V.

    1990-01-01

    An automatic system for acid-base titrations by electrogeneration of H+ and OH- ions, with potentiometric end-point detection, was developed. The system includes a PC-compatible computer for instrumental control, data acquisition and processing, which allows up to 13 samples to be analysed sequentially with no human intervention. The system performance was tested on the titration of standard solutions, which it carried out with low errors and RSD. It was subsequently applied to the analysis of various samples of environmental and nutritional interest, specifically waters, soft drinks and wines. PMID:18925283

  11. Automatic indexing of scanned documents: a layout-based approach

    NASA Astrophysics Data System (ADS)

    Esser, Daniel; Schuster, Daniel; Muthmann, Klemens; Berger, Michael; Schill, Alexander

    2012-01-01

    Archiving official written documents such as invoices, reminders and account statements is becoming more and more important in both business and private settings. Creating appropriate index entries for document archives, such as the sender's name, creation date or document number, is tedious manual work. We present a novel approach to the automatic indexing of documents based on generic positional extraction of index terms. For this purpose we apply the knowledge of document templates, stored in a common full-text search index, to find index positions that were successfully extracted in the past.

  12. Automatic target recognition based on cross-plot.

    PubMed

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository.

  13. Research on fiber diameter automatic measurement based on image detection

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Jiang, Yu; Shen, Wen; Han, Guangjie

    2010-10-01

    In this paper, we present a method for Fiber Diameter Automatic Measurement (FDAM). The design is based on image detection technology and provides a rapid and accurate measurement of average fiber diameter. First, the sample fiber image is preprocessed using an improved median filtering algorithm; then edge detection with the Sobel operator is applied to detect the target fiber; finally, the diameter at any point and the average diameter of the fiber are measured precisely with a shortest-path search algorithm. Experiments are conducted to prove the accuracy of the measurement, and simulations show that measurement errors caused by human factors can be reduced to a desirable level.
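
    A sketch of the first two stages, assuming OpenCV, could look like this; a plain median filter stands in for the paper's improved variant, the input file name is hypothetical, and the shortest-path diameter measurement is omitted.

```python
# Sketch: median-filter preprocessing followed by Sobel edge detection.
import cv2
import numpy as np

img = cv2.imread("fiber.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
smoothed = cv2.medianBlur(img, 5)                    # suppress impulse noise

# Sobel gradients in x and y, combined into an edge-magnitude image.
gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
edges = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))
```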

  14. Automatic translation of digraph to fault-tree models

    NASA Astrophysics Data System (ADS)

    Iverson, David L.

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  15. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  16. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  17. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  18. The Automatic Measuring Machines and Ground-Based Astrometry

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.

    The introduction of automatic measuring machines into astronomical investigations a little more than a quarter of a century ago has substantially increased the range and scale of projects that astronomers have been able to undertake. During that time there have been dozens of photographic sky surveys, covering all of the sky more than once. The high accuracy and speed of automatic measuring machines have given photographic astrometry the opportunity to create high-precision catalogs such as the CPC2. Investigations of the structure and kinematics of the stellar components of our Galaxy have been revolutionized in the last decade by the advent of automated plate-measuring machines. In an age of rapidly evolving electronic detectors and soon-expected space-based catalogs, one could think that the twilight hours of astronomical photography have come. Against that point of view, astronomers such as D. Monet (U.S.N.O.), L. G. Taff (STScI) and M. K. Tsvetkov (IA BAS) have described several ways in which photographic astronomy can evolve. One of them is that "...special efforts must be taken to extract useful information from the photographic archives before the plates degrade and the technology required to measure them disappears". Another is the minimization of the systematic errors of ground-based star catalogs by employing appropriate reduction technology together with sufficiently dense and precise space-based reference star catalogs. In addition, the use of emulsions of higher resolution and quantum efficiency, such as Tech Pan, and new methods for processing the digitized information hold great promise for future deep (B<25) surveys (Bland-Hawthorn et al. 1993, AJ, 106, 2154). Thus, not only is the continued hard work of all existing automatic measuring machines apparently needed, but the design, development and deployment of a new generation of portable, mobile scanners is also necessary.

  19. Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN

    NASA Technical Reports Server (NTRS)

    Griffis, H.

    1985-01-01

    Large general purpose finite element programs require users to develop large quantities of input data. General purpose pre-processors are used to decrease the effort required to develop structural models. Further reduction of effort can be achieved by specific application pre-processors. Automatic Dynamic Aircraft Modeler (ADAM) is one such application specific pre-processor. General purpose pre-processors use points, lines and surfaces to describe geometric shapes. Specifying that ADAM is used only for aircraft structures allows generic structural sections, wing boxes and bodies, to be pre-defined. Hence with only gross dimensions, thicknesses, material properties and pre-defined boundary conditions a complete model of an aircraft can be created.

  20. Machine learning-based automatic detection of pulmonary trunk

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Deng, Kun; Liang, Jianming

    2011-03-01

    Pulmonary embolism (PE) is a common cardiovascular emergency with about 600,000 cases occurring annually and causing approximately 200,000 deaths in the US. CT pulmonary angiography (CTPA) has become the reference standard for PE diagnosis, but the interpretation of these large image datasets is made complex and time consuming by the intricate branching structure of the pulmonary vessels, a myriad of artifacts that may obscure or mimic PEs, and suboptimal contrast bolus and inhomogeneities within the pulmonary arterial blood pool. To meet this challenge, several approaches for computer-aided diagnosis of PE in CTPA have been proposed. However, none of these approaches is capable of detecting central PEs, distinguishing the pulmonary artery from the vein to effectively remove false positives from the veins, or dynamically adapting to the suboptimal contrast conditions associated with CTPA scans. Overcoming these shortcomings requires highly efficient and accurate identification of the pulmonary trunk. For this purpose, we present a machine-learning-based approach for automatically detecting the pulmonary trunk. Our idea is to train a cascaded AdaBoost classifier with a large number of Haar features extracted from CTPA image samples, so that the pulmonary trunk can be automatically identified by sequentially scanning the CTPA images and classifying each encountered sub-image with the trained classifier. Our approach outperforms an existing anatomy-based approach, requiring no explicit representation of anatomical knowledge and achieving nearly 100% accuracy when tested on a large number of cases.

  1. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  2. Automatic Tooth Segmentation of Dental Mesh Based on Harmonic Fields.

    PubMed

    Liao, Sheng-hui; Liu, Shi-jian; Zou, Bei-ji; Ding, Xi; Liang, Ye; Huang, Jun-hui

    2015-01-01

    An important preprocessing step in computer-aided orthodontics is to segment teeth from dental models accurately, with as few manual interactions as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is common in clinical cases. Most published methods in this area are either inaccurate or require many manual interactions. Motivated by state-of-the-art general mesh segmentation methods that adopt the theory of harmonic fields to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of using a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically, with robustness and efficiency.

  3. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    PubMed Central

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide resulting in over 200,000 deaths. It is prevalent mainly in developing countries where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these extract relevant features from the audio signal and subsequently classify them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis successfully from all audio recordings without any false diagnosis. It can also automatically detect individual cough sounds with 92% accuracy and PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreaks control. PMID:27583523
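
    One of the classification blocks could be sketched as follows, assuming librosa and scikit-learn; the MFCC summary features and the pre-segmented labeled clips are generic placeholders for the paper's actual feature set and event segmentation.

```python
# Sketch: audio features per clip fed to a logistic regression classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path):
    """Fixed-length MFCC summary for one audio clip (placeholder features)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_cough_detector(clip_paths, labels):
    """labels: 1 for cough segments, 0 for other sounds (hypothetical data)."""
    X = np.array([clip_features(p) for p in clip_paths])
    return LogisticRegression(max_iter=1000).fit(X, labels)

# detector = train_cough_detector(paths, labels)
# p_cough = detector.predict_proba(X_new)[:, 1]   # cough likelihood per clip
```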

  4. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  5. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    PubMed

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model.

  6. A CNN based Hybrid approach towards automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal V.; Katiyar, Sunil K.

    2013-06-01

    Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Networks (CNN), SIFT, coresets, and Cellular Automata. CNN has been found effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimisation, adaptive resampling and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically used spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology was also shown to be effective in providing intelligent interpretation and adaptive resampling.

  7. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    NASA Astrophysics Data System (ADS)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way of solving this problem is the integrated modernization of the power engineering objects of mechanical engineering companies, intended to increase the efficiency of energy supply control and to improve the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we have built a mathematical model of the data exchange process between measuring transformers and a universal SCADA system. The modeling results show that the discussed equipment meets the Standard's requirements and that the use of a universal SCADA system for these purposes is preferable and economically reasonable. In the modeling the authors used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  8. Towards Automatic Semantic Labelling of 3D City Models

    NASA Astrophysics Data System (ADS)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as many applications rely on semantics. Such information is not always available: it is not collected at all times, it may be lost during data transformation, or its absence may be caused by non-interoperability when integrating data from other sources. This research is a first step towards an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region-growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for the regions that represent either the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that start a region-growing algorithm which creates regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85% and 99% for the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties that point to the next steps in the research.

  9. Automatic Construction of Anomaly Detectors from Graphical Models

    SciTech Connect

    Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
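
    A sketch of the core idea, assuming scikit-learn: fit a single LDA model and use the per-record log-likelihood as one automatically derived detector statistic. The count matrix here is a random placeholder for discretized flow records.

```python
# Sketch: one probabilistic model (LDA), many anomaly scores derived from it.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(500, 40))             # placeholder flow-count matrix

lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X)

topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
doc_topic = lda.transform(X)                     # per-record topic mixture
word_prob = doc_topic @ topic_word               # per-record feature distribution

# Low log-likelihood of the observed counts flags a record as anomalous.
loglik = (X * np.log(np.clip(word_prob, 1e-12, 1.0))).sum(axis=1)
anomalies = loglik < np.percentile(loglik, 1)
```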

  10. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design: difficult, complex, yet redundant effort. Automatic generation of database software systems has been proposed as a solution. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  11. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  12. Enhancing Automaticity through Task-Based Language Learning

    ERIC Educational Resources Information Center

    De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena

    2007-01-01

    In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a…

  13. Search-matching algorithm for acoustics-based automatic sniper localization

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    Most modern automatic sniper localization systems are based on the acoustical emissions produced by gunfire events. In order to estimate the spatial coordinates of the sniper location, these systems measure the time delays of arrival of the acoustic shock wave fronts at a microphone array. In more advanced systems, model-based estimation of the nonlinear distortion parameters of the N-waves is used to estimate the projectile trajectory and calibre. In this work we address the sniper localization problem using a model-based search-matching approach. The automatic sniper localization algorithm works by searching for the acoustic model of ballistic shock waves that best matches the measured data. For this purpose, we implement a previously released acoustic model of ballistic shock waves. The sniper location, the projectile trajectory and calibre, and the muzzle velocity are regarded as the input variables of this model. A search algorithm is implemented to find the combination of input variables that minimizes a fitness function, defined as the distance between measured and simulated data. In this way, the sniper location, the projectile trajectory and calibre, and the muzzle velocity can be found. In order to evaluate the performance of the algorithm, we conduct computer-based experiments using simulated gunfire event data calculated at the nodes of a virtual distributed sensor network. Preliminary simulation results are quite promising, showing fast convergence of the algorithm and good localization accuracy.
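
    A toy version of the search-matching loop, assuming SciPy: minimize the misfit between simulated and measured arrival times over the source position and emission time. The full method would search trajectory, calibre and muzzle velocity against the shock wave model; the sensor geometry and arrival times below are hypothetical.

```python
# Sketch: fitness-minimization localization from arrival times at an array.
import numpy as np
from scipy.optimize import minimize

C = 343.0                                          # speed of sound, m/s
sensors = np.array([[0.0, 0, 0], [10, 0, 0],       # hypothetical microphone
                    [0, 10, 0], [0, 0, 5]])        # array geometry, m
measured = np.array([0.120, 0.105, 0.098, 0.117])  # hypothetical arrivals, s

def fitness(params):
    pos, t0 = params[:3], params[3]                # source position + emission time
    sim = t0 + np.linalg.norm(sensors - pos, axis=1) / C
    return np.sum((sim - measured) ** 2)           # distance to measured data

res = minimize(fitness, x0=[5.0, 5.0, 1.0, 0.0], method="Nelder-Mead")
source_position = res.x[:3]
```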

  14. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    SciTech Connect

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-06-15

    Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is the automatic selection of the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal; 2) high-pass filtering by subtracting the Gaussian-smoothed image; and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
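
    A sketch of the enhancement chain, assuming OpenCV, with a small exhaustive grid standing in for the paper's interior-point parameter optimization and image entropy as the objective; the input file name is hypothetical.

```python
# Sketch: high-pass filtering + CLAHE, with parameters chosen by entropy.
import cv2
import numpy as np

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def enhance(img, weight, clip, tile):
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=15)
    hipass = cv2.addWeighted(img, 1 + weight, blur, -weight, 0)  # unsharp mask
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tile, tile))
    return clahe.apply(hipass)

img = cv2.imread("setup.png", cv2.IMREAD_GRAYSCALE)  # hypothetical 2D RT image
best = max(((w, c, t) for w in (0.5, 1.0, 2.0)       # grid in place of the
            for c in (1.0, 2.0, 4.0)                 # interior-point optimizer
            for t in (4, 8, 16)),
           key=lambda p: entropy(enhance(img, *p)))
result = enhance(img, *best)
```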

  15. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903

  16. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.

    PubMed

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases.
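
    A minimal illustration of representing a closed boundary with a periodic cubic spline, assuming SciPy; the synthetic boundary points are placeholders, and the data-driven evolution of the contour toward the actual nucleus boundary is not reproduced here.

```python
# Sketch: fit and densely sample a closed (periodic) cubic spline boundary.
import numpy as np
from scipy.interpolate import splev, splprep

# Hypothetical rough boundary points around a nucleus; per=1 makes the
# cubic spline periodic, i.e. a closed curve.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
x = 100 + 30 * np.cos(theta) + np.random.randn(12)
y = 100 + 20 * np.sin(theta) + np.random.randn(12)

tck, _ = splprep([x, y], s=5.0, per=1)    # smoothing closed cubic spline
u = np.linspace(0, 1, 400)
bx, by = splev(u, tck)                    # dense samples along the boundary
```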

  17. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time-stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element method. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and performance of the automatic time-stepping scheme. Implementation of the scheme can lead to improvements in accuracy and efficiency for groundwater flow models.
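
    Error-controlled time stepping of this general flavor can be sketched as follows, assuming NumPy; a constant stands in for the stability factor obtained from the dual problem, and step doubling supplies the local error estimate in place of the dG a posteriori estimate.

```python
# Sketch: adaptive backward-Euler time stepping with an error target.
import numpy as np

def adaptive_backward_euler(f, u0, t_end, tol, stability=1.0, dt=1e-3):
    """Integrate du/dt = f(u) with step-doubling error control."""
    def step(u, h):                          # backward Euler via fixed-point
        v = u.copy()                         # iteration (fine for small h)
        for _ in range(50):
            v = u + h * f(v)
        return v

    t, u = 0.0, np.asarray(u0, dtype=float)
    while t < t_end:
        dt = min(dt, t_end - t)              # do not overshoot the end time
        coarse = step(u, dt)
        fine = step(step(u, dt / 2), dt / 2)
        err = stability * np.linalg.norm(fine - coarse)
        if err < tol:                        # accept the step
            t, u = t + dt, fine
        # grow or shrink the step toward the error target
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-15))))
    return u
```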

  18. Power-based Shift Schedule for Pure Electric Vehicle with a Two-speed Automatic Transmission

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqi; Liu, Yanfang; Liu, Qiang; Xu, Xiangyang

    2016-11-01

    This paper introduces a comprehensive shift schedule for a two-speed automatic transmission of a pure electric vehicle. Considering the driving ability and efficiency of electric vehicles, a power-based shift schedule is proposed based on three principles. This comprehensive shift schedule takes the current vehicle speed and motor load power as input parameters to satisfy the vehicle's driving power demand with the lowest energy consumption. A simulation model has been established to verify the dynamic and economic performance of the comprehensive shift schedule. Compared with traditional dynamic and economic shift schedules, simulation results indicate that the power-based shift schedule is superior to the traditional ones.
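
    A toy decision rule in this spirit, with a hypothetical motor efficiency map and hypothetical ratios: choose the gear that delivers the demanded power at the lowest electrical input power for the current speed.

```python
# Sketch: speed + demanded power in, energy-optimal gear out (toy numbers).
import math

def motor_efficiency(speed_rpm, torque_nm):
    return 0.9 if 1500 <= speed_rpm <= 6000 else 0.75   # placeholder map

def select_gear(vehicle_speed_mps, demand_power_w,
                ratios=(2.5, 1.0), wheel_radius=0.3):
    best_gear, best_input = None, float("inf")
    for gear, ratio in enumerate(ratios, start=1):
        omega = vehicle_speed_mps / wheel_radius * ratio   # motor speed, rad/s
        rpm = omega * 60 / (2 * math.pi)
        torque = demand_power_w / max(omega, 1e-6)         # required torque
        input_power = demand_power_w / motor_efficiency(rpm, torque)
        if input_power < best_input:                       # lowest consumption
            best_gear, best_input = gear, input_power
    return best_gear
```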

  19. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework, we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  20. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
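
    A sketch of the fingerprint clustering step, assuming scikit-learn and reading "AP-Cluster" as affinity propagation; the RSSI matrix is a random placeholder for crowd-sourced fingerprints.

```python
# Sketch: cluster crowd-sourced fingerprints and keep the exemplars.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
rssi = rng.normal(-60, 10, size=(200, 8))   # placeholder: samples x access points

ap = AffinityPropagation(random_state=0).fit(rssi)
representative = rssi[ap.cluster_centers_indices_]   # exemplar fingerprints
labels = ap.labels_                                  # cluster of each sample
# Each exemplar would then be linked to a reference position (an identified
# door) to become one entry of the constructed radio-map.
```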

  1. Automatic target validation based on neuroscientific literature mining for tractography.

    PubMed

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures, since validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus and, the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by text-mining models, both in rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of the text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target has been missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large amount of publications and abstracts. We believe this tool will be useful in helping the neuroscience community to facilitate connectivity studies of particular brain regions. The text mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.

  2. Automatic target validation based on neuroscientific literature mining for tractography

    PubMed Central

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L.; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures, since validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus and, the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by text-mining models, both in rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of the text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target has been missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large amount of publications and abstracts. We believe this tool will be useful in helping the neuroscience community to facilitate connectivity studies of particular brain regions. The text mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/. PMID

  3. Automatic QSAR modeling of ADME properties: blood-brain barrier penetration and aqueous solubility.

    PubMed

    Obrezanova, Olga; Gola, Joelle M R; Champness, Edmund J; Segall, Matthew D

    2008-01-01

    In this article, we present an automatic model generation process for building QSAR models using Gaussian Processes, a powerful machine learning modeling method. We describe the stages of the process that ensure models are built and validated within a rigorous framework: descriptor calculation, splitting data into training, validation and test sets, descriptor filtering, application of modeling techniques and selection of the best model. We apply this automatic process to data sets of blood-brain barrier penetration and aqueous solubility and compare the resulting automatically generated models with 'manually' built models using external test sets. The results demonstrate the effectiveness of the automatic model generation process for two types of data sets commonly encountered in building ADME QSAR models, a small set of in vivo data and a large set of physico-chemical data.

  4. Automatic QSAR modeling of ADME properties: blood brain barrier penetration and aqueous solubility

    NASA Astrophysics Data System (ADS)

    Obrezanova, Olga; Gola, Joelle M. R.; Champness, Edmund J.; Segall, Matthew D.

    2008-06-01

    In this article, we present an automatic model generation process for building QSAR models using Gaussian Processes, a powerful machine learning modeling method. We describe the stages of the process that ensure models are built and validated within a rigorous framework: descriptor calculation, splitting data into training, validation and test sets, descriptor filtering, application of modeling techniques and selection of the best model. We apply this automatic process to data sets of blood-brain barrier penetration and aqueous solubility and compare the resulting automatically generated models with `manually' built models using external test sets. The results demonstrate the effectiveness of the automatic model generation process for two types of data sets commonly encountered in building ADME QSAR models, a small set of in vivo data and a large set of physico-chemical data.
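
    A condensed sketch of such a workflow, assuming scikit-learn: split the data, fit a Gaussian Process with a noise-aware kernel, and evaluate on the held-out set. Descriptor calculation and filtering are presumed done upstream, and the data here is a random placeholder.

```python
# Sketch: automated split-fit-validate loop for a GP QSAR model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # placeholder descriptor matrix
y = rng.normal(size=300)                 # placeholder endpoint, e.g. log S

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X_train, y_train)
pred, std = gp.predict(X_test, return_std=True)   # mean and uncertainty
```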

  5. Acoustical model of small calibre ballistic shock waves in air for automatic sniper localization applications

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    The phenomenon of ballistic shock wave emission by a small calibre projectile at supersonic speed is quite relevant to automatic sniper localization applications. When available, ballistic shock wave analysis makes possible the estimation of the main ballistic features of a gunfire event. The propagation of ballistic shock waves in air is a process that mainly involves nonlinear distortion, or steepening, and atmospheric absorption. Current ballistic shock wave propagation models used in automatic sniper localization systems only consider nonlinear distortion effects. This means that only the rates of change of shock peak pressure and N-wave duration with distance are considered in the determination of the miss distance. In the present paper we present an improved acoustical model of small calibre ballistic shock wave propagation in air, intended to be used in acoustics-based automatic sniper localization applications. In our approach we have considered nonlinear distortion, but we have additionally introduced the effects of atmospheric sound absorption. Atmospheric absorption is implemented in the time domain in order to obtain faster calculation times than those computed in the frequency domain. Furthermore, we take advantage of the fact that atmospheric absorption plays a fundamental role in the rise times of the shocks, and introduce the rate of change of the rise time with distance as a third parameter to be used in the determination of the miss distance. This leads to a more accurate and robust estimation of the miss distance, and consequently of the projectile trajectory and the spatial coordinates of the gunshot origin.

  6. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independent of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  7. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
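
    The flavour of such an edge-based pipeline can be sketched with stock OpenCV calls (Canny edges plus probabilistic Hough line detection); this is a hedged stand-in under assumed inputs, not the authors' edge-chaining and top-down detection algorithms:

```python
# A hedged stand-in built from stock OpenCV calls; "page.png" is a
# hypothetical input image, not data from the paper.
import cv2
import numpy as np

page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(page, 50, 150)

# Detect long, straight candidate border lines within the edge map.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=5)
lines = [] if lines is None else [l[0] for l in lines]

# Keep near-horizontal/near-vertical lines as storyboard border candidates,
# then order them top-to-bottom as a crude stand-in for reading-order logic.
borders = [(x1, y1, x2, y2) for x1, y1, x2, y2 in lines
           if abs(x1 - x2) < 5 or abs(y1 - y2) < 5]
borders.sort(key=lambda b: (min(b[1], b[3]), min(b[0], b[2])))
print(len(borders), "border-line candidates")
```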

  8. Automatic Cell Phone Menu Customization Based on User Operation History

    NASA Astrophysics Data System (ADS)

    Fukazawa, Yusuke; Hara, Mirai; Ueno, Hidetoshi

    Mobile devices are becoming more and more difficult to use due to the sheer number of functions now supported. In this paper, we propose a menu customization system that ranks functions so as to make interesting functions easy to access, including both frequently used functions and infrequently used ones that have the potential to satisfy the user. Concretely, we define the features of the phone's functions by extracting keywords from the manufacturer's manual, and propose a method that uses the Ranking SVM (Support Vector Machine) to rank the functions based on the user's operation history. We conducted a one-week home-use test to evaluate the efficiency and usability of the menu customization. The results of this test show that the average rank on the last day was half that of the first day, and that the user could find, on average, 3.14 more kinds of new functions, ones that the user did not know about before the test, on a daily basis. This shows that the proposed customized menu supports the user by making it easier to access frequent items and to find new interesting functions. From interviews, almost 70% of the users were satisfied with the ranking provided by menu customization as well as the usability of the resulting menus. In addition, the interviews show that automatic cell phone menu customization is more appropriate for mobile phone beginners than expert users.
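
    The Ranking SVM step can be sketched with the standard pairwise reduction, in which each training example is the difference between the feature vectors of a more-preferred and a less-preferred function; the features and preference order below are synthetic placeholders, not the authors' setup:

```python
# An illustrative pairwise RankSVM sketch (the standard pairwise reduction).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
feats = rng.normal(size=(30, 8))     # placeholder keyword features, 30 functions
order = rng.permutation(30)          # placeholder preference order from usage history

# Build pairwise difference vectors: label +1 if the first item is preferred.
X_pairs, y_pairs = [], []
for i in range(30):
    for j in range(i + 1, 30):
        d = feats[order[i]] - feats[order[j]]
        X_pairs.extend([d, -d])
        y_pairs.extend([1, -1])

clf = LinearSVC(C=1.0).fit(np.array(X_pairs), np.array(y_pairs))
scores = feats @ clf.coef_.ravel()   # higher score = ranked nearer the menu top
print("top-5 menu slots:", np.argsort(-scores)[:5])
```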

  9. Modeling and Automatic Feedback Control of Tremor: Adaptive Estimation of Deep Brain Stimulation

    PubMed Central

    Rehan, Muhammad; Hong, Keum-Shik

    2013-01-01

    This paper discusses modeling and automatic feedback control of (postural and rest) tremor for adaptive-control-methodology-based estimation of deep brain stimulation (DBS) parameters. The simplest linear oscillator-based tremor model, between stimulation amplitude and tremor, is investigated by utilizing input-output knowledge. Further, a nonlinear generalization of the oscillator-based tremor model, useful for derivation of a control strategy involving incorporation of parametric-bound knowledge, is provided. Using the Lyapunov method, a robust adaptive output feedback control law, based on measurement of the tremor signal from the fingers of a patient, is formulated to estimate the stimulation amplitude required to control the tremor. By means of the proposed control strategy, an algorithm is developed for estimation of DBS parameters such as amplitude, frequency and pulse width, which provides a framework for development of an automatic clinical device for control of motor symptoms. The DBS parameter estimation results for the proposed control scheme are verified through numerical simulations. PMID:23638163
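
    The "simplest linear oscillator-based tremor model" could plausibly take a second-order form such as the one below, with tremor amplitude y(t) driven by stimulation amplitude u(t); the symbols (natural frequency, damping ratio, input gain) are assumptions for illustration, not the paper's exact notation:

```latex
\ddot{y}(t) + 2\zeta\omega_n\,\dot{y}(t) + \omega_n^2\,y(t) = k\,u(t)
```

    On this reading, the adaptive control law adjusts its estimates of uncertain parameters such as k online while driving the measured tremor y toward zero.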

  10. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limiting in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  11. Mindfulness-Based Parent Training: Strategies to Lessen the Grip of Automaticity in Families with Disruptive Children

    ERIC Educational Resources Information Center

    Dumas, Jean E.

    2005-01-01

    Disagreements and conflicts in families with disruptive children often reflect rigid patterns of behavior that have become overlearned and automatized with repeated practice. These patterns are mindless: They are performed with little or no awareness and are highly resistant to change. This article introduces a new, mindfulness-based model of…

  12. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT-scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
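
    Minimizing reprojection error over a rigid transform can be sketched as a nonlinear least-squares problem; the camera matrix, matched points, and all names below are assumed placeholders rather than the paper's data or implementation:

```python
# Hedged sketch of reprojection-error minimization over a rigid transform,
# assuming a known 3x4 camera projection P, matched 2D points uv (n x 2) and
# mandible 3D points pts (n x 3); all variable names are illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(P, X):
    """Pinhole projection of 3D points X with a 3x4 matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

def residuals(params, P, pts, uv):
    """Reprojection residuals for a rotation vector + translation."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return (project(P, pts @ R.T + t) - uv).ravel()

# Placeholder data for the sketch (synthetic 3D points in front of the camera).
rng = np.random.default_rng(2)
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = rng.normal(size=(12, 3)) + np.array([0, 0, 10.0])
uv = project(P, pts)                       # synthetic "photo" measurements

sol = least_squares(residuals, x0=np.zeros(6), args=(P, pts, uv))
print("rigid transform parameters:", np.round(sol.x, 6))
```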

  13. Automatic Recommendations for E-Learning Personalization Based on Web Usage Mining Techniques and Information Retrieval

    ERIC Educational Resources Information Center

    Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa

    2009-01-01

    In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…

  14. Automatic calibration of dial gauges based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Feng, Haiping; Kong, Ming

    2008-10-01

    To address the image characteristics of dial gauges, an automatic detection system for dial gauges is designed and implemented using computer vision and digital image processing methods. An improved image subtraction method and an adaptive threshold segmentation method are used for preprocessing. A new region-segmentation method is proposed to partition the dial image so that only the useful blocks of the dial image are processed rather than the whole area; this greatly reduces the amount of computation and effectively improves the processing speed. The method has been applied in the automatic detection system for dial gauges, which makes it possible for the detection of dial gauges to be completed intelligently, automatically and rapidly.

  15. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in minutes and the second in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all experiments performed.

  16. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems which use FPGA devices and measurement cards in the FMC standard. The following aspects are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. Solutions presented in this paper build on a previous SPIE publication.

  17. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) are assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimal interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) are assessed both in image and object space for the 3D modelling of a complex rock art shelter.
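
    A hedged FBM sketch with stock OpenCV SIFT and Lowe's ratio test is shown below; the image file names are hypothetical, and this stands in for, rather than reproduces, the hierarchical matching scheme assessed in the paper:

```python
# Feature-based matching sketch: SIFT keypoints, brute-force matching, and
# the ratio test to reject ambiguous matches. Inputs are hypothetical files.
import cv2

img1 = cv2.imread("shelter_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("shelter_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep a match only if it is clearly better than the second-best candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(kp1), "features in image 1;", len(good), "matches kept")
```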

  18. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  19. Thesaurus-Based Automatic Indexing: A Study of Indexing Failure.

    ERIC Educational Resources Information Center

    Caplan, Priscilla Louise

    This study examines automatic indexing performed with a manually constructed thesaurus on a document collection of titles and abstracts of library science master's papers. Errors are identified when the meaning of a posted descriptor, as identified by context in the thesaurus, does not match that of the passage of text which occasioned the…

  20. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December 1989. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban areas.

  1. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of methods of analysis, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  2. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred in the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients between the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients based on the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  3. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  4. Study of burn scar extraction automatically based on level set method using remote sensing data.

    PubMed

    Liu, Yang; Dai, Qin; Liu, Jianbo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, and also considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve derived from a binary image, which is obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model.
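
    The general shape of this approach can be sketched with off-the-shelf pieces: K-means on a difference image to seed an initial curve, followed by a Chan-Vese-style level set evolution. The sketch uses scikit-image's morphological C-V variant and synthetic NBR arrays, not the modified C-V model of the paper:

```python
# Hedged sketch: synthetic NBR arrays stand in for Landsat bands.
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import morphological_chan_vese

rng = np.random.default_rng(3)
nbr_pre = rng.random((128, 128))            # placeholder pre-fire NBR
nbr_post = nbr_pre.copy()
nbr_post[40:90, 30:100] -= 0.5              # synthetic burn scar
d_nbr = nbr_pre - nbr_post                  # difference image (dNBR)

# Initial curve: 2-cluster K-means on the difference values.
labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
    .fit_predict(d_nbr.reshape(-1, 1)).reshape(d_nbr.shape)
burn_label = labels[d_nbr == d_nbr.max()][0]      # cluster holding the strongest change
init = (labels == burn_label).astype(np.int8)

# Level set evolution from the K-means initialisation.
scar = morphological_chan_vese(d_nbr, 100, init_level_set=init)
print("burn scar pixels:", int(scar.sum()))
```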

  5. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing (Mayer et al., 2012), computing depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fusing these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  6. Automatic vertebral identification using surface-based registration

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
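
    The selection criterion alone can be sketched as follows, assuming registration has already been performed and each candidate vertebral surface is represented as a dense point cloud; the candidate whose point-to-surface errors have the lowest standard deviation is selected (synthetic data throughout):

```python
# Hedged sketch of the selection criterion only, with surfaces represented
# simply as dense point clouds and all data synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
candidates = [rng.normal(loc=k, size=(2000, 3)) for k in range(3)]  # 3 "vertebrae"
points = candidates[1][:40] + rng.normal(scale=0.05, size=(40, 3))  # physical points

def surface_error_std(surface, pts):
    """Standard deviation of point-to-surface (nearest neighbour) distances."""
    dists, _ = cKDTree(surface).query(pts)
    return dists.std()

stds = [surface_error_std(s, points) for s in candidates]
print("selected vertebra:", int(np.argmin(stds)), "stds:", np.round(stds, 3))
```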

  7. Automatic Parallelization Using OpenMP Based on STL Semantics

    SciTech Connect

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  8. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator

    DTIC Science & Technology

    2006-08-01

    System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator. Jae-Jun Kim and Brij N. Agrawal, Department of...

  9. Modeling and Prototyping of Automatic Clutch System for Light Vehicles

    NASA Astrophysics Data System (ADS)

    Murali, S.; Jothi Prakash, V. M.; Vishal, S.

    2017-03-01

    Nowadays, recycling or regenerating waste into something useful is appreciated all around the globe, as it reduces the greenhouse gas emissions that contribute to global climate change. This study deals with the provision of an automatic clutch mechanism in vehicles to facilitate the smooth changing of gears. It proposes using the exhaust gases, which are normally expelled as waste from the turbocharger, to actuate the clutch mechanism. At present, clutches in four-wheeled vehicles are operated automatically by using an air compressor. In this study, a conceptual design is proposed in which the clutch is operated by the exhaust gas from the turbocharger, eliminating the air compressor used in the existing system and freeing drivers from operating the clutch manually. This work involved the development, analysis and validation of the conceptual design through simulation software. The developed conceptual design of an automatic pneumatic clutch system was then tested with a prototype.

  10. Automatic Sleep Scoring Based on Modular Rule-Based Reasoning Units and Signal Processing Units

    DTIC Science & Technology

    2007-11-02

    Keywords: sleep scoring, rule-based reasoning, multi-staged. Integrated analysis of the state of sleep through polysomnography is crucial for the diagnosis of sleep-related disease, but conventional analog polysomnography systems require a tremendous amount of paper and much labor from trained experts. Digital polysomnography with subsequent automatic analysis has therefore become the trend. In sleep analysis, sleep stage scoring is…

  11. Migration Based Event Detection and Automatic P- and S-Phase Picking in Hengill, Southwest Iceland

    NASA Astrophysics Data System (ADS)

    Wagner, F.; Tryggvason, A.; Gudmundsson, O.; Roberts, R.; Bodvarsson, R.; Fehler, M.

    2015-12-01

    Automatic detection of seismic events is a complicated process. Common procedures depend on the detection of seismic phases (e.g. P and S) in single-trace analyses and their correct association with locatable point sources. The event detection threshold is thus directly related to the single-trace detection threshold. Highly sensitive phase detectors detect low signal-to-noise ratio (S/N) phases but also produce a low percentage of locatable events. Short inter-event times of only a few seconds, not uncommon during seismic or volcanic crises, complicate any event association algorithm. We present an event detection algorithm based on seismic migration of trace attributes into an a-priori three-dimensional (3D) velocity model, and evaluate its capacity as an automatic detector compared to conventional methods. Detecting events using seismic migration removes the need for phase association. The event detector runs on a stack of time-shifted traces, which increases S/N and thus allows for a low detection threshold. Detected events come with an origin time and a location estimate, enabling a focused trace analysis, including P- and S-phase recognition, to discard false detections and build a basis for accurate automatic phase picking. We apply the migration-based detection algorithm to data from a semi-permanent seismic network at Hengill, an active volcanic region with several geothermal production sites in southwest Iceland. The network includes 26 stations with inter-station distances down to 5 km. Results show a high success rate compared to the manually picked catalogue (up to 90% detected). New detections, missed by the standard detection routine, show a generally good ratio of true to false alarms, i.e. most of the new events are locatable.
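
    The core stacking idea can be sketched in a few lines: characteristic functions are time-shifted by predicted travel times to a trial source location and summed, and a stack maximum flags a detection. The sketch below assumes a constant velocity rather than the 3D model used in the study, and all signals are synthetic:

```python
# Migration-based detection sketch over a single trial source location.
import numpy as np

fs = 100.0                                  # samples per second
nsta, nsamp = 5, 6000
rng = np.random.default_rng(5)
cf = np.abs(rng.normal(size=(nsta, nsamp))) # placeholder characteristic functions

sta_xy = rng.uniform(0, 20, size=(nsta, 2)) # station coordinates [km]
src_xy = np.array([10.0, 10.0])             # trial source on a search grid
v = 6.0                                     # constant P velocity [km/s]

# Inject a synthetic event arriving at the travel-time-consistent samples.
tt = np.linalg.norm(sta_xy - src_xy, axis=1) / v
onset = 3000 + np.round(tt * fs).astype(int)
for i, k in enumerate(onset):
    cf[i, k:k + 20] += 5.0

# Stack: shift each trace back by its predicted travel time and sum.
shifts = np.round(tt * fs).astype(int)
stack = sum(np.roll(cf[i], -shifts[i]) for i in range(nsta))
print("detection at sample", int(np.argmax(stack)), "(expected ~3000)")
```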

  12. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    PubMed

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required.

  13. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of a flexible Fourier contour model, and is an extension of Szekely's 2D Fourier-descriptor-based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar-connectivity-based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg with both a command line and a Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed excellent reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.
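
    Complex Fourier descriptors of a closed 2D contour, the general machinery behind such a shape model, can be sketched directly with an FFT; the toy contour below is illustrative and unrelated to CCSeg's implementation:

```python
# Complex Fourier descriptors: contour points become complex numbers and the
# FFT gives the descriptors; truncation yields a smooth low-order shape model.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
contour = (1.0 + 0.2 * np.cos(3 * theta)) * np.exp(1j * theta)  # toy closed shape

coeffs = np.fft.fft(contour) / len(contour)   # complex Fourier descriptors

# Keep only a few low-order harmonics (positive and negative frequencies).
keep = 8
trunc = np.zeros_like(coeffs)
trunc[:keep] = coeffs[:keep]
trunc[-keep:] = coeffs[-keep:]
smooth = np.fft.ifft(trunc) * len(contour)
print("reconstruction error:", float(np.abs(smooth - contour).max()))
```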

  14. Template-based automatic extraction of the joint space of foot bones from CT scan

    NASA Astrophysics Data System (ADS)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice, and the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model within a region of interest (ROI) identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to the region including two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint is set by initial seeds which are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  15. One-Day Offset between Simulated and Observed Daily Hydrographs: An Exploration of the Issue in Automatic Model Calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Leon, L.; Yang, W.

    2014-12-01

    The literature of hydrologic modelling shows that in daily simulation of the rainfall-runoff relationship, the simulated hydrograph response to some rainfall events happens one day earlier than the observed one. This one-day offset issue results in significant residuals between the simulated and observed hydrographs and adversely impacts the model performance metrics that are based on the aggregation of daily residuals. Based on the analysis of sub-daily rainfall and runoff data sets in this study, the one-day offset issue appears to be inevitable when the same time interval, e.g. the calendar day, is used to measure daily rainfall and runoff data sets. This is an error introduced through data aggregation and needs to be properly addressed before calculating the model performance metrics. Otherwise, the metrics would not represent the modelling quality and could mislead the automatic calibration of the model. In this study, an algorithm is developed to scan the simulated hydrograph against the observed one, automatically detect all one-day offset incidents and shift the simulated hydrograph of those incidents one day forward before calculating the performance metrics. This algorithm is employed in the automatic calibration of the Soil and Water Assessment Tool that is set up for the Rouge River watershed in Southern Ontario, Canada. Results show that with the proposed algorithm, the automatic calibration to maximize the daily Nash-Sutcliffe (NS) identifies a solution that accurately estimates the magnitude of peak flow rates and the shape of rising and falling limbs of the observed hydrographs. But, without the proposed algorithm, the same automatic calibration finds a solution that systematically underestimates the peak flow rates in order to perfectly match the timing of simulated and observed peak flows.
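
    A simplified sketch of this pre-metric adjustment is given below: simulated peaks that land one day before the observed peak are shifted forward before the Nash-Sutcliffe efficiency is computed. The incident-detection rule here is a deliberately crude stand-in for the scanning algorithm described, and all flow values are synthetic:

```python
# One-day-offset correction sketch before computing Nash-Sutcliffe (NS).
import numpy as np

def nash_sutcliffe(obs, sim):
    """Standard NS efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fix_one_day_offsets(obs, sim, window=3):
    """Shift simulated peaks that arrive one day before the observed peak."""
    adj = sim.copy()
    for i in range(1, len(sim) - 1):
        is_sim_peak = sim[i] > sim[i - 1] and sim[i] >= sim[i + 1]
        if is_sim_peak and obs[i + 1] > obs[i]:        # observed peak comes a day later
            a, b = max(i - window, 1), min(i + window, len(sim) - 2)
            adj[a + 1:b + 1] = sim[a:b]                 # shift the incident forward
            adj[a] = sim[a - 1]                         # backfill the vacated day
    return adj

obs = np.array([1, 1, 1, 2, 9, 4, 2, 1, 1, 1, 3, 8, 5, 2, 1, 1], dtype=float)
sim = np.array([1, 1, 2, 9, 4, 2, 1, 1, 1, 3, 8, 4, 2, 1, 1, 1], dtype=float)

print("NS raw:    ", round(nash_sutcliffe(obs, sim), 3))
print("NS shifted:", round(nash_sutcliffe(obs, fix_one_day_offsets(obs, sim)), 3))
```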

  16. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image becomes a significant task before performing lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification, and a disk expansion algorithm is then applied for automatically detecting lesion boundaries. A set of sub-images, including smooth and irregular boundaries in tumor objects and in the speckle-noised background, is trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object or a background pixel. The enhanced binary image highlights the object and suppresses the speckle noise, and can be regarded as a degraded paint character (DPC) image containing closure noise, which is well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method has been successfully demonstrated in our previous works. The boundary detection of ultrasonic breast lesions is thus equivalent to the removal of speckle noise. By applying the disk expansion method to the binary image, we obtain a radius-based image where the radius for each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used for searching the complete breast lesion region, so that the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to

  17. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and the manual method by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when tumors extend into the chest wall or mediastinum.

  19. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  20. Implementation of a microcontroller-based semi-automatic coagulator.

    PubMed

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable for doctors' offices and small clinics. The objective of this project is to design and implement a low-cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  1. Evaluation of Automatic Atlas-Based Lymph Node Segmentation for Head-and-Neck Cancer

    SciTech Connect

    Stapleford, Liza J.; Lawson, Joshua D.; Perkins, Charles; Edelman, Scott; Davis, Lawrence

    2010-07-01

    Purpose: To evaluate if automatic atlas-based lymph node segmentation (LNS) improves efficiency and decreases inter-observer variability while maintaining accuracy. Methods and Materials: Five physicians with head-and-neck IMRT experience used computed tomography (CT) data from 5 patients to create bilateral neck clinical target volumes covering specified nodal levels. A second contour set was automatically generated using a commercially available atlas. Physicians modified the automatic contours to make them acceptable for treatment planning. To assess contour variability, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to take collections of contours and calculate a probabilistic estimate of the 'true' segmentation. Differences between the manual, automatic, and automatic-modified (AM) contours were analyzed using multiple metrics. Results: Compared with the 'true' segmentation created from manual contours, the automatic contours had a high degree of accuracy, with sensitivity, Dice similarity coefficient, and mean/max surface disagreement values comparable to the average manual contour (86%, 76%, 3.3/17.4 mm automatic vs. 73%, 79%, 2.8/17 mm manual). The AM group was more consistent than the manual group for multiple metrics, most notably reducing the range of contour volume (106-430 mL manual vs. 176-347 mL AM) and percent false positivity (1-37% manual vs. 1-7% AM). Average contouring time savings with the automatic segmentation was 11.5 min per patient, a 35% reduction. Conclusions: Using the STAPLE algorithm to generate 'true' contours from multiple physician contours, we demonstrated that, in comparison with manual segmentation, atlas-based automatic LNS for head-and-neck cancer is accurate, efficient, and reduces interobserver variability.
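
    For reference, the Dice similarity coefficient used in this evaluation is the standard overlap measure 2|A∩B|/(|A|+|B|), computable in a couple of lines over boolean masks (the masks below are synthetic):

```python
# Dice similarity coefficient over boolean masks (standard formulation).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for boolean volumes a, b."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((50, 50), dtype=bool); auto[10:40, 10:40] = True
manual = np.zeros((50, 50), dtype=bool); manual[12:42, 12:40] = True
print("DSC:", round(dice(auto, manual), 3))
```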

  2. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters, and can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge from visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
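
    The decision rule can be sketched as follows, assuming the expert knowledge database holds one estimated probability density per stage; a new epoch is assigned to the stage with the highest conditional probability. A single synthetic scalar parameter is used here purely for illustration:

```python
# Conditional-probability stage assignment from per-stage density estimates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
stages = ["awake", "REM", "light", "deep"]
# Placeholder training data: one scalar parameter per epoch, per stage.
training = {s: rng.normal(loc=m, scale=0.5, size=200)
            for s, m in zip(stages, [3.0, 2.0, 1.0, 0.0])}

# "Expert knowledge database": one KDE-estimated pdf per stage.
pdfs = {s: gaussian_kde(x) for s, x in training.items()}

def classify(epoch_value, prior=None):
    """Assign the stage with the highest (normalized) conditional probability."""
    prior = prior or {s: 1.0 / len(stages) for s in stages}
    post = {s: float(pdfs[s](epoch_value)) * prior[s] for s in stages}
    z = sum(post.values())
    return max(post, key=post.get), {s: p / z for s, p in post.items()}

stage, post = classify(0.8)
print("assigned stage:", stage, {s: round(p, 2) for s, p in post.items()})
```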

  3. Automatic identification of sources and trajectories of atmospheric Saharan dust aerosols with Latent Gaussian Models

    NASA Astrophysics Data System (ADS)

    Garbe, Christoph; Bachl, Fabian

    2013-04-01

    Dust transported from the Sahara across the ocean has a high impact on radiation fluxes and marine nutrient cycles. Significant progress has been made in characterising Saharan dust properties (Formenti et al., 2011) and its radiative effects through the 'SAharan Mineral dUst experiMent' (SAMUM) (Ansmann et al., 2011). While the models simulating Saharan dust transport processes have been considerably improved in recent years, it is still an open question which meteorological processes and surface characteristics are mainly responsible for dust transported to the Sub-Tropical Atlantic (Schepanski et al., 2009; Tegen et al., 2012). Currently, there exists a large discrepancy between modelled dust emission events and those observed from satellites. In this contribution we present an approach for classifying and tracking dust plumes based on a Bayesian hierarchical model. Recent developments in computational statistics known as Integrated Nested Laplace Approximations (INLA) have paved the way for efficient inference in a relevant subclass, the Generalized Linear Model (GLM) (Rue et al., 2009). We present the results of our approach based on data from the SEVIRI instrument on board the Meteosat Second Generation (MSG) satellite. We demonstrate its accuracy for automatically detecting sources of dust and aerosol concentrations in the atmosphere. The trajectories of aerosols are also computed very efficiently. In our framework, we automatically identify optimal parameters for the computation of atmospheric aerosol motion. The applicability of our approach to a wide range of conditions will be discussed, as well as the ground truthing of our results and future directions in this field of research.

  4. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    NASA Astrophysics Data System (ADS)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments and especially complicates the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topologically oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centre of gravity of an HRU and the river network, and the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing implemented in the open source GRASS-GIS software, for which several Python scripts or some already available algorithms were used, such as the Triangle software. First, some scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters have lots of right angles, which were smoothed. Second, the algorithms more specifically address badly-shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or a centroid outside the polygon. To identify these elements we used shape descriptors. The convexity index was considered the best descriptor to identify them with a threshold
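
    As a sketch of one such descriptor, the convexity index of an HRU polygon can be computed with shapely as the ratio of polygon area to convex-hull area; the 0.8 threshold below is purely illustrative, not the study's calibrated value:

```python
# Convexity index of an HRU polygon: area over convex-hull area.
from shapely.geometry import Polygon

hru = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)])  # L-shaped HRU

convexity = hru.area / hru.convex_hull.area
print("convexity index:", round(convexity, 3),
      "-> needs reshaping" if convexity < 0.8 else "-> acceptable")
```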

  5. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with prior knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  6. Environmental monitoring based on automatic change detection from remotely sensed data: kernel-based approach

    NASA Astrophysics Data System (ADS)

    Shah-Hosseini, Reza; Homayouni, Saeid; Safari, Abdolreza

    2015-01-01

    In the event of a natural disaster, such as a flood or earthquake, using fast and efficient methods for estimating the extent of the damage is critical. Automatic change mapping and estimation are important in order to monitor environmental changes, e.g., deforestation. Traditional change detection (CD) approaches are time consuming, user dependent, and strongly influenced by noise and/or complex spectral classes in a region. Change maps obtained by these methods usually suffer from isolated changed pixels and have low accuracy. To deal with this, an automatic CD framework is proposed, based on the integration of the change vector analysis (CVA) technique, kernel-based C-means clustering (KCMC), and a kernel-based minimum distance (KBMD) classifier. In parallel with the proposed algorithm, a support vector machine (SVM) CD method is presented and analyzed. In the first step, a differential image is generated via two approaches in a high dimensional Hilbert space. Next, by using CVA and automatically determining a threshold, pseudo-training samples of the change and no-change classes are extracted. These training samples are used for determining the initial values of the KCMC parameters and for training the SVM-based CD method. Then, optimizing a cost function built on geometrical and spectral similarity in the kernel space is employed to estimate the KCMC parameters and to select the precise training samples. These training samples are used to train the KBMD classifier. Last, the class label of each unknown pixel is determined using the KBMD classifier and the SVM-based CD method. In order to evaluate the efficiency of the proposed algorithm for various remote sensing images and applications, two different datasets acquired by Quickbird and Landsat TM/ETM+ are used. The results show good flexibility and effectiveness of this automatic CD method for environmental change monitoring. In addition, the comparative analysis of results from the proposed method

  7. Spectral phase-based automatic calibration scheme for swept source-based optical coherence tomography systems

    NASA Astrophysics Data System (ADS)

    Ratheesh, K. M.; Seah, L. K.; Murukeshan, V. M.

    2016-11-01

    The automatic calibration in Fourier-domain optical coherence tomography (FD-OCT) systems allows for high resolution imaging with precise depth ranging functionality in many complex imaging scenarios, such as microsurgery. However, the accuracy and speed of the existing automatic schemes are limited due to the functional approximations and iterative operations used in their procedures. In this paper, we present a new real-time automatic calibration scheme for swept source-based optical coherence tomography (SS-OCT) systems. The proposed automatic calibration can be performed during scanning operation and does not require an auxiliary interferometer for calibration signal generation and an additional channel for its acquisition. The proposed method makes use of the spectral component corresponding to the sample surface reflection as the calibration signal. The spectral phase function representing the non-linear sweeping characteristic of the frequency-swept laser source is determined from the calibration signal. The phase linearization with improved accuracy is achieved by normalization and rescaling of the obtained phase function. The fractional-time indices corresponding to the equidistantly spaced phase intervals are estimated directly from the resampling function and are used to resample the OCT signals. The proposed approach allows for precise calibration irrespective of the path length variation induced by the non-planar topography of the sample or galvo scanning. The conceived idea was illustrated using an in-house-developed SS-OCT system by considering the specular reflection from a mirror and other test samples. It was shown that the proposed method provides high-performance calibration in terms of axial resolution and sensitivity without increasing computational and hardware complexity.
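
    The core of the scheme, recovering a resampling function from the spectral phase of a surface-reflection fringe, can be sketched as follows. This is a minimal illustration assuming a single strong calibration fringe, not the authors' implementation.

```python
# Rough sketch, not the authors' code. `calib` is assumed to be the band-passed
# fringe of a single strong reflection; `fringe` is a raw A-line sampled on the
# same nonlinear wavenumber grid.
import numpy as np
from scipy.signal import hilbert

def resampling_indices(calib):
    phase = np.unwrap(np.angle(hilbert(calib)))          # spectral phase function
    phase = (phase - phase[0]) / (phase[-1] - phase[0])  # normalize and rescale
    target = np.linspace(0.0, 1.0, phase.size)           # equidistant phase intervals
    # fractional-time indices where the measured phase crosses the target grid
    return np.interp(target, phase, np.arange(phase.size))

def linearize(fringe, idx):
    return np.interp(idx, np.arange(fringe.size), fringe)

n = 2048
t = np.linspace(0.0, 1.0, n)
sweep = 2 * np.pi * (900 * t + 120 * t**2)      # synthetic nonlinear source sweep
idx = resampling_indices(np.cos(sweep))
fringe_lin = linearize(np.cos(3 * sweep), idx)  # deeper reflector, same sweep nonlinearity
```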

  8. Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.

    2012-12-01

    Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on comparison of short-term and long-term average ratios (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that applies, in sequence, kurtosis-derived characteristic functions (CF) and eigenvalue decompositions to 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes, and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth, and dip) calculated from eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR); a pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7, for a total of 1601 picks: 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64
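
    A minimal sketch of a sliding-window kurtosis characteristic function on a synthetic single trace is shown below; the multi-band, multi-window refinements and the polarization analysis are omitted, and all names are illustrative.

```python
# Hedged sketch: a sharp increase in sliding-window kurtosis marks an onset.
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, win):
    """Sliding-window kurtosis characteristic function."""
    cf = np.zeros(trace.size)
    for i in range(win, trace.size):
        cf[i] = kurtosis(trace[i - win:i])
    return cf

def pick_onset(trace, fs, win_s=1.0):
    cf = kurtosis_cf(trace, int(win_s * fs))
    return np.argmax(np.gradient(cf)) / fs   # onset at the steepest CF increase

fs = 100.0
t = np.arange(0, 20, 1 / fs)
trace = np.random.default_rng(1).normal(size=t.size)
onset = 1000                                 # synthetic arrival at 10 s
trace[onset:] += 10 * np.exp(-(t[onset:] - t[onset]) / 2.0) \
                    * np.sin(2 * np.pi * 8 * (t[onset:] - t[onset]))
print(f"picked onset: {pick_onset(trace, fs):.2f} s")
```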

  9. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-21

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  10. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
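
    For reference, the Dice similarity coefficient quoted in both records can be computed from binary masks as in the following sketch (synthetic masks, not the paper's data):

```python
# Hedged sketch of the overlap metric; `seg` and `ref` are boolean 3-D masks
# of the automatic and reference liver segmentations.
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

rng = np.random.default_rng(2)
ref = rng.random((32, 32, 32)) > 0.5
seg = ref.copy()
seg[:2] = ~seg[:2]                 # perturb a slab to mimic segmentation error
print(f"DSC = {dice(seg, ref):.3f}")
```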

  11. Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation.

    PubMed

    Yu, Yanyan; Chen, Yimin; Chiu, Bernard

    2016-07-01

    Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false edge removal algorithm proposed herein. 2D slice-based propagation was used in which the contour on each image slice was deformed using a level-set evolution model, which was driven by edge-based and region-based energy fields generated by dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside of the contour. The transverse image on which the highest accuracy was attained was chosen to be the initial slice for the propagation process. Evaluation was performed for 336 transverse images from 15 prostates that include images acquired at the mid-gland, base, and apex regions of the prostates. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79 ± 0.26 mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was not only obtained at the mid-gland, but also at the base and apex regions.
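
    The self-assessment idea, scoring a contour by the separation of mean intensity inside and outside it, can be sketched as follows (synthetic data; the authors' exact scoring formula may differ):

```python
# Rough sketch. `img` is a 2-D slice, `mask` the region enclosed by the
# deformed contour; the score grows with inside/outside intensity separation.
import numpy as np

def contour_score(img, mask):
    return abs(img[mask].mean() - img[~mask].mean())

rng = np.random.default_rng(3)
img = rng.normal(1.0, 0.1, (128, 128))
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
img[mask] -= 0.5                     # hypoechoic interior, as in TRUS prostate
print(f"score = {contour_score(img, mask):.2f}")
# the slice whose deformed contour scores highest would seed the propagation
```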

  12. Patch-based label fusion for automatic multi-atlas-based prostate segmentation in MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images, which utilizes patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on anatomical signature. This segmentation technique was validated with a clinical study of 13 patients and its accuracy was assessed using the physicians' manual segmentations (gold standard). Dice volumetric overlapping was used to quantify the difference between the automatic and manual segmentation. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy with manual segmentations.
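
    A minimal sketch of nonlocal patch-based label fusion for a single target voxel is given below, using the common exponential patch-similarity weighting; the atlas selection and anatomical-signature weighting of the paper are not reproduced, and all names are illustrative.

```python
# Hedged sketch: similar atlas patches get larger weights in the label vote.
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Weighted vote of atlas labels with exp(-||patch diff||^2 / h) weights."""
    d2 = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    w = np.exp(-d2 / h)
    return float(np.dot(w, atlas_labels) / w.sum()) > 0.5  # fused binary label

rng = np.random.default_rng(4)
target = rng.random((3, 3, 3))
patches = [target + rng.normal(0, s, target.shape) for s in (0.01, 0.02, 0.5)]
labels = np.array([1, 1, 0])   # the dissimilar atlas votes 0 but carries little weight
print(fuse_label(target, patches, labels))
```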

  13. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition.
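
    A rough sketch of the ICA stage with a priori artifact identification is given below; the wavelet decomposition of full wavelet-ICA is omitted, the correlation threshold is an assumption, and the blink template is synthetic.

```python
# Hedged sketch, not the paper's implementation. `eeg` has shape
# (samples, channels); `artifact_ref` is a previously acquired artifact
# waveform (e.g., an EOG template).
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifacts(eeg, artifact_ref, corr_thresh=0.7):
    ica = FastICA(n_components=eeg.shape[1], max_iter=1000, random_state=0)
    sources = ica.fit_transform(eeg)               # (samples, components)
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], artifact_ref)[0, 1]
        if abs(r) > corr_thresh:                   # matches the a priori artifact
            sources[:, k] = 0.0
    return ica.inverse_transform(sources)          # reconstruction without artifact

rng = np.random.default_rng(5)
n = 2000
blink = np.exp(-((np.arange(n) - 1000) / 40.0) ** 2)   # synthetic EOG blink
clean = rng.normal(size=(n, 4))
eeg = clean + np.outer(blink, [3.0, 2.0, 0.5, 0.1])
restored = remove_artifacts(eeg, blink)
print(restored.shape)
```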

  14. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by AN Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km². The region used for calibration, for which manual crater counts are available, has an area of 100 km². In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 × 4.4 km² size offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher
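
    The central quantity, a cumulative crater size-frequency distribution and its log-log slope, can be sketched as follows (synthetic diameters; the fit to a production function and the absolute-age lookup are not reproduced):

```python
# Hedged sketch: N(>D) per km^2 over diameter bins, with a power-law slope fit.
import numpy as np

def cumulative_csfd(diams_km, area_km2, bins):
    """Cumulative crater density N(>D) per km^2 at each diameter bin."""
    return np.array([(diams_km > d).sum() for d in bins]) / area_km2

rng = np.random.default_rng(6)
diams = rng.pareto(2.0, 500) * 0.1 + 0.05          # synthetic power-law-like diameters
bins = np.logspace(-1, 0, 10)                      # 0.1 .. 1 km
density = cumulative_csfd(diams, 100.0, bins)
slope = np.polyfit(np.log10(bins), np.log10(density + 1e-9), 1)[0]  # guard empty bins
print(f"log-log CSFD slope: {slope:.2f}")
```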

  15. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. With the great progress made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  16. Results of Flight Test of an Automatically Stabilized Model C (Swept Back) Four-Wing Tiamat

    NASA Technical Reports Server (NTRS)

    Seacord, Charles L., Jr.; Teitelbaum, J. M.

    1947-01-01

    The results of the first flight test of a swept-back four-wing version of the Tiamat (MX-570 model C), which was launched at the NACA Pilotless Aircraft Research Station at Wallops Island, Va., are presented. In general, the flight behavior was close to that predicted by calculations based on stability theory and oscillating-table tests of the autopilot. The flight test thus indicates that the techniques employed to predict automatic stability are valid and practical from an operational viewpoint. The limitations of the method used to predict flight behavior arise from the fact that the calculations assume no coupling among roll, pitch, and yaw, while in actual flight some such coupling does exist.

  17. Detection and classification of football players with automatic generation of models

    NASA Astrophysics Data System (ADS)

    Gómez, Jorge R.; Jaraba, Elias Herrero; Montañés, Miguel Angel; Contreras, Francisco Martínez; Uruñuela, Carlos Orrite

    2010-01-01

    We focus on the automatic detection and classification of players in a football match. Our approach is not based on any a priori knowledge of the outfits, but on the assumption that the two main uniforms detected correspond to the two football teams. The algorithm is designed to operate in real time once it has been trained, and is able to detect partially occluded players and update the color of the kits to cope with gradual illumination changes over time. Our method, evaluated on real sequences, gave better detection and classification results than those obtained by a system using a manual selection of samples to compute a Gaussian mixture model.
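
    A minimal sketch of the underlying idea, modelling the two dominant kit colours with a two-component Gaussian mixture, is given below with synthetic RGB samples; the detection front-end and the online model update are not reproduced.

```python
# Hedged sketch: cluster player colour samples into the two team uniforms.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
team_a = rng.normal([200, 30, 30], 10, (300, 3))    # reddish kits
team_b = rng.normal([30, 30, 200], 10, (300, 3))    # bluish kits
pixels = np.vstack([team_a, team_b])                # RGB samples from detections

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)                        # 0/1 team assignment
# gmm.means_ could be re-estimated over time to track illumination changes
print(gmm.means_.round(0))
```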

  18. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 ± 0.05 mm (mean absolute distance error) in the cervical region and 0.27 ± 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  19. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    PubMed

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent, for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  20. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    PubMed

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using the data mining approach. The AutoMealRecord data were examined to determine if it could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) with dietary preference was assessed with multiple linear regression analyses, and in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord System. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool.
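
    A rough sketch of the analysis pipeline, PCA-derived dietary patterns feeding a linear regression validated by leave-one-out, is shown below on synthetic data; variable names are illustrative and the obesity cut-off of BMI ≥ 23 follows the abstract.

```python
# Hedged sketch: dietary patterns via PCA, BMI predictability via LOO validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(8)
X = rng.random((100, 12))                        # food-category intake matrix
bmi = 20 + 3 * X[:, 0] + rng.normal(0, 0.5, 100)

patterns = PCA(n_components=5).fit_transform(X)  # five dietary patterns
pred = cross_val_predict(LinearRegression(), patterns, bmi, cv=LeaveOneOut())
accuracy = np.mean((pred >= 23) == (bmi >= 23))  # "would-be obese" agreement
print(f"leave-one-out BMI>=23 accuracy: {accuracy:.1%}")
```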

  1. Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Snarska, M.; Krzych, J.

    2006-11-01

    Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio, which arises from lack of information, uncertainty, and incomplete knowledge of reality, all of which preclude a perfect prediction of future price changes. Despite its many advantages, this tool is little known and not widely used among investors on the Warsaw Stock Exchange. The main reason is its high level of complexity and the immense calculations it requires. The aim of this paper is to introduce an automatic decision-making system that allows a single investor to use complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by so much statistical uncertainty that it can be seen as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator {http://gra.onet.pl}. The simulation produced an 18% gain, compared with a 10% loss of the Warsaw Stock Exchange main index WIG over the same period.
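
    The RMT denoising step can be sketched as follows: eigenvalues of the empirical correlation matrix below the Marchenko-Pastur upper edge are treated as noise and flattened. This is a textbook variant, not necessarily the authors' exact filter.

```python
# Hedged sketch of RMT-based correlation-matrix cleaning.
import numpy as np

def rmt_clean(returns):
    """Flatten the noise band of eigenvalues while preserving the trace."""
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    lam, V = np.linalg.eigh(C)
    lam_max = (1 + np.sqrt(N / T)) ** 2        # Marchenko-Pastur upper edge
    noise = lam < lam_max
    lam[noise] = lam[noise].mean()             # replace noise eigenvalues by their average
    return V @ np.diag(lam) @ V.T

rng = np.random.default_rng(9)
cleaned = rmt_clean(rng.normal(size=(500, 100)))
print(f"trace preserved: {np.trace(cleaned):.1f}")   # ~100 for 100 assets
```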

  2. Automatic coregistration of volumetric images based on implanted fiducial markers.

    PubMed

    Koch, Martin; Maltz, Jonathan S; Belongie, Serge J; Gangadharan, Bijumon; Bose, Supratik; Shukla, Himanshu; Bani-Hashemi, Ali R

    2008-10-01

    The accurate delivery of external beam radiation therapy is often facilitated through the implantation of radio-opaque fiducial markers (gold seeds). Before the delivery of each treatment fraction, seed positions can be determined via low dose volumetric imaging. By registering these seed locations with the corresponding locations in the previously acquired treatment planning computed tomographic (CT) scan, it is possible to adjust the patient position so that seed displacement is accommodated. The authors present an unsupervised automatic algorithm that identifies seeds in both planning and pretreatment images and subsequently determines a rigid geometric transformation between the two sets. The algorithm is applied to the imaging series of ten prostate cancer patients. Each test series comprises a single multislice planning CT and multiple megavoltage conebeam (MVCB) images. Each MVCB dataset is obtained immediately prior to a subsequent treatment session. Seed locations were determined to within 1 mm with an accuracy of 97 ± 6.1% for datasets obtained by application of a mean imaging dose of 3.5 cGy per study. False positives occurred in three separate instances, but only when datasets were obtained at imaging doses too low to enable fiducial resolution by a human operator, or when the prostate gland had undergone large displacement or significant deformation. The registration procedure requires under nine seconds of computation time on a typical contemporary computer workstation.
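
    The rigid registration between matched seed sets can be sketched with the standard SVD-based (Kabsch) solution; the seed detection and correspondence search of the paper are not reproduced, and the point sets below are synthetic.

```python
# Hedged sketch of rigid point-set registration between planning and
# pretreatment seed coordinates (matched (n, 3) arrays).
import numpy as np

def rigid_transform(plan, daily):
    """Least-squares rotation R and translation t with daily ~ plan @ R.T + t."""
    pc, dc = plan.mean(0), daily.mean(0)
    H = (plan - pc).T @ (daily - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ pc

rng = np.random.default_rng(10)
plan = rng.random((5, 3)) * 100
theta = np.deg2rad(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
daily = plan @ R_true.T + np.array([2.0, -1.0, 0.5])  # rotated + displaced seeds
R, t = rigid_transform(plan, daily)
print(np.allclose(plan @ R.T + t, daily))
```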

  3. Research on automatic optimization of ground control points in image geometric rectification based on Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Li, Ying; Cheng, Bo

    2009-10-01

    With the development of remote sensing satellites, the quantity of remote sensing image data is increasing tremendously, which makes manual ground control point (GCP) selection for image geometric rectification a huge workload. A GCP database is an effective way to cut down manual operation, but the GCPs loaded from a database are generally redundant, which can slow down rectification. How to automatically optimize these ground control points is a problem that needs to be resolved urgently. According to the basic theory of geometric rectification and the principles of GCP selection, this paper reviews existing methods for automatic GCP optimization and puts forward a new Voronoi-diagram-based method that filters the redundant ground control points without manual subjectivity, for better accuracy. The paper is organized as follows: First, it clarifies the basic theory of remote sensing image polynomial geometric rectification and the arithmetic for obtaining the GCP error. Second, it introduces the Voronoi diagram in detail, including its origin, development, and characteristics, especially its construction process. Third, considering the deficiencies of existing methods, the paper presents the idea of applying the Voronoi diagram to filter GCPs in order to complete the automatic optimization; in this process, it advances the concept of a single GCP's importance value based on the Voronoi diagram. By integrating the GCP error and the importance value, the paper then gives the theory and workflow of automatic GCP optimization, presents an example of the method's application, and concludes by pointing out the advantages of Voronoi-diagram-based GCP optimization.
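
    A rough sketch of the importance-value idea: each GCP's Voronoi cell area serves as its importance, combined with its rectification error to rank and filter the redundant points. The combination rule below is an illustrative assumption, not the paper's exact formula.

```python
# Hedged sketch: Voronoi cell area as GCP importance; large-cell, low-error
# points are retained first. Unbounded border cells are kept unconditionally.
import numpy as np
from scipy.spatial import Voronoi

def cell_area(vor, point_idx):
    region = vor.regions[vor.point_region[point_idx]]
    if -1 in region or len(region) == 0:
        return np.inf                        # unbounded cell on the convex hull
    poly = vor.vertices[region]
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))  # shoelace

rng = np.random.default_rng(15)
gcps = rng.random((40, 2)) * 1000            # (x, y) control-point coordinates
errors = rng.random(40)                      # rectification residual per GCP
vor = Voronoi(gcps)
importance = np.array([cell_area(vor, i) for i in range(len(gcps))])
score = importance / (1e-6 + errors)         # illustrative combination of criteria
keep = np.argsort(score)[-30:]               # retain the 30 most valuable GCPs
print(len(keep), "GCPs kept")
```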

  4. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on the time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part of the diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of the EEG signal is used to obtain the TFI. The TFI is segmented based on the frequency bands of the EEG rhythms. Features derived from the histogram of the segmented TFI are used as an input feature set to multiclass least squares support vector machines (MC-LS-SVM), together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions, for automatic classification of sleep stages from EEG signals. Experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals.
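
    A minimal sketch of the feature pipeline on synthetic epochs is given below; a spectrogram stands in for the SPWVD and a plain RBF support vector classifier for the MC-LS-SVM.

```python
# Hedged sketch: band-segmented time-frequency histogram features + RBF SVM.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30)]      # delta, theta, alpha, beta

def tfi_features(epoch, fs):
    f, t, S = spectrogram(epoch, fs=fs, nperseg=int(2 * fs))
    feats = []
    for lo, hi in BANDS:                           # segment the TFI by rhythm band
        band = S[(f >= lo) & (f < hi)]
        feats.extend(np.histogram(band, bins=8, density=True)[0])
    return np.array(feats)

fs = 100
rng = np.random.default_rng(11)
epochs = rng.normal(size=(60, 30 * fs))            # synthetic 30-s EEG epochs
y = rng.integers(0, 5, 60)                         # placeholder stage labels
X = np.array([tfi_features(e, fs) for e in epochs])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```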

  5. A Telesurveillance System With Automatic Electrocardiogram Interpretation Based on Support Vector Machine and Rule-Based Processing

    PubMed Central

    Lin, Ching-Miao; Lai, Feipei; Ho, Yi-Lwun; Hung, Chi-Sheng

    2015-01-01

    Background Telehealth care is a global trend affecting clinical practice around the world. To mitigate the workload of health professionals and provide ubiquitous health care, a comprehensive surveillance system with value-added services based on information technologies must be established. Objective We conducted this study to describe our proposed telesurveillance system designed for monitoring and classifying electrocardiogram (ECG) signals and to evaluate the performance of ECG classification. Methods We established a telesurveillance system with an automatic ECG interpretation mechanism. The system included: (1) automatic ECG signal transmission via telecommunication, (2) ECG signal processing, including noise elimination, peak estimation, and feature extraction, (3) automatic ECG interpretation based on the support vector machine (SVM) classifier and rule-based processing, and (4) display of ECG signals and their analyzed results. We analyzed 213,420 ECG signals that were diagnosed by cardiologists as the gold standard to verify the classification performance. Results In the clinical ECG database from the Telehealth Center of the National Taiwan University Hospital (NTUH), the experimental results showed that the ECG classifier yielded a specificity value of 96.66% for normal rhythm detection, a sensitivity value of 98.50% for disease recognition, and an accuracy value of 81.17% for noise detection. For the detection performance of specific diseases, the recognition model mainly generated sensitivity values of 92.70% for atrial fibrillation, 89.10% for pacemaker rhythm, 88.60% for atrial premature contraction, 72.98% for T-wave inversion, 62.21% for atrial flutter, and 62.57% for first-degree atrioventricular block. Conclusions Through connected telehealth care devices, the telesurveillance system, and the automatic ECG interpretation system, this mechanism was intentionally designed for continuous decision-making support and is reliable enough to reduce the

  6. BioASF: a framework for automatically generating executable pathway models specified in BioPAX

    PubMed Central

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K. Anton; Abeln, Sanne; Heringa, Jaap

    2016-01-01

    Motivation: Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. Results: To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. Availability and Implementation: The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF. Contact: j.heringa@vu.nl PMID:27307645

  7. Controlling Retrieval during Practice: Implications for Memory-Based Theories of Automaticity

    ERIC Educational Resources Information Center

    Wilkins, Nicolas J.; Rawson, Katherine A.

    2011-01-01

    Memory-based processing theories of automaticity assume that shifts from algorithmic to retrieval-based processing underlie practice effects on response times. The current work examined the extent to which individuals can exert control over the involvement of retrieval during skill acquisition and the factors that may influence control. In two…

  8. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial compressed file size reductions, by a factor of 0.5 on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
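
    The foveation filter can be sketched as a distance-weighted blend between a sharp and a blurred copy of each frame; this simplified single-fovea version only approximates the variants described above.

```python
# Hedged sketch: blur weight grows with distance from the fovea centre.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, fovea, radius=40.0, sigma=6.0):
    blurred = gaussian_filter(frame.astype(float), sigma)
    yy, xx = np.indices(frame.shape)
    dist = np.hypot(yy - fovea[0], xx - fovea[1])
    w = np.clip(dist / radius - 1.0, 0.0, 1.0)    # 0 inside fovea, 1 far away
    return (1 - w) * frame + w * blurred

rng = np.random.default_rng(16)
frame = rng.random((120, 160))
out = foveate(frame, fovea=(60, 80))              # fovea at a high-priority region
print(out.shape)
```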

  9. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    PubMed

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three modeling approaches to item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential for meeting increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed.
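
    Item cloning, the first of the three approaches, can be sketched as systematic substitution into a parent-item template; the stem and content elements below are invented examples, not items from the study.

```python
# Hedged sketch of item cloning: one parent stem, systematic substitutions.
from itertools import product

STEM = ("A patient presents with {symptom} after {procedure}. "
        "What is the most likely cause?")
symptoms = ["persistent swelling", "acute pain", "prolonged bleeding"]
procedures = ["a routine extraction", "root canal therapy"]

items = [STEM.format(symptom=s, procedure=p)
         for s, p in product(symptoms, procedures)]
print(len(items), "cloned items")   # 3 x 2 = 6 items from a single parent
```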

  10. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
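
    The selection mechanism can be sketched as one regression tree per PET-AS method, each predicting the DSC from (volume, contrast, texture) features, with the method of highest predicted accuracy chosen; the method names and training data below are placeholders.

```python
# Hedged sketch of decision-tree-based method selection.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(12)
methods = ["thresholding", "region_growing", "clustering"]   # placeholder names
X_train = rng.random((100, 3))              # volume, peak-to-background, texture
trees = {}
for m in methods:
    dsc = rng.uniform(0.5, 1.0, 100)        # placeholder observed DSCs per method
    trees[m] = DecisionTreeRegressor(max_depth=4).fit(X_train, dsc)

def select_method(features):
    """Pick the PET-AS method with the highest predicted DSC."""
    preds = {m: t.predict([features])[0] for m, t in trees.items()}
    return max(preds, key=preds.get)

print(select_method([0.4, 0.7, 0.2]))
```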

  11. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.

  12. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on the fusion of edge detection and clustering outputs. To provide locality, an ellipse is generated from the characteristics of each candidate cluster individually. Then, the ratio of edge pixels to non-edge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule merges points that satisfy a predefined threshold and are assumed to denote the same vehicle. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method achieved 86% overall correctness and 83% completeness.

  13. A VxD-based automatic blending system using multithreaded programming.

    PubMed

    Wang, L; Jiang, X; Chen, Y; Tan, K C

    2004-01-01

    This paper discusses the object-oriented software design for an automatic blending system. By combining the advantages of a programmable logic controller (PLC) and an industrial control PC (ICPC), an automatic blending control system is developed for a chemical plant. The system structure and the multithread-based communication approach are first presented in this paper. The overall software design issues, such as system requirements and functionalities, are then discussed in detail. Furthermore, by replacing the conventional dynamic link library (DLL) with virtual X device drivers (VxDs), a practical and cost-effective solution is provided to improve the robustness of the Windows platform-based automatic blending system in small- and medium-sized plants.

  14. Automatic modeling of pectus excavatum corrective prosthesis using artificial neural networks.

    PubMed

    Rodrigues, Pedro L; Rodrigues, Nuno F; Pinho, A C M; Fonseca, Jaime C; Correia-Pinto, Jorge; Vilaça, João L

    2014-10-01

    Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) to successfully employ a thoracic prosthesis for anterior chest wall remodeling. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for the replacement of CT by a 3D laser scanner (radiation-free) for prosthesis modeling. The complete elimination of CT is based on an accurate determination of rib positions and the prosthesis placement region from skin surface points. The developed solution relies on the normalized, combined output of a set of artificial neural networks (ANNs). Each ANN model was trained with data vectors from 165 male patients, using soft tissue thicknesses (STT) comprising information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. The results also showed that ANN performance can be improved by introducing a manually determined initial STT value in the ANN normalization procedure (average error of 2.82 ± 0.76 mm). This error range is well below that of current manual prosthesis modeling (approximately 11 mm), making this a valuable, radiation-free procedure for prosthesis personalization.
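
    A rough sketch of the regression set-up, a feed-forward network mapping soft-tissue-thickness vectors to rib depth, is shown below on synthetic data; the ensemble structure and normalization refinement of the paper are not reproduced, and all names are illustrative.

```python
# Hedged sketch: MLP regression from STT vectors to rib depth.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(17)
stt = rng.random((165, 20)) * 30.0             # mm, one STT vector per patient
rib_depth = stt.mean(axis=1) + rng.normal(0, 2, 165)   # synthetic target

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(stt[:130], rib_depth[:130])
err = np.abs(model.predict(stt[130:]) - rib_depth[130:])
print(f"mean absolute error: {err.mean():.1f} mm")
```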

  15. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments and alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types.

  16. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
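
    The Goldstein form of the dead-end elimination criterion on which such methods build can be sketched for a single position as follows (random energies; the dense conformer libraries and electron-density term of Fitmunk are not reproduced). Rotamer r is eliminated if some competitor t satisfies E(r) - E(t) + sum over neighbours j of min over s of [E(r, j_s) - E(t, j_s)] > 0.

```python
# Hedged sketch of the Goldstein DEE criterion at one position.
import numpy as np

def dee_eliminate(self_e, pair_e):
    """self_e: (R,) self energies; pair_e: (R, J, S) pairwise energies with
    S rotamers at each of J neighbouring positions. Returns survivor mask."""
    R = self_e.size
    alive = np.ones(R, dtype=bool)
    for r in range(R):
        for t in range(R):
            if r == t:
                continue
            gap = self_e[r] - self_e[t] + np.sum(np.min(pair_e[r] - pair_e[t], axis=1))
            if gap > 0:          # t always beats r, so r is a dead end
                alive[r] = False
                break
    return alive

rng = np.random.default_rng(13)
print(dee_eliminate(rng.random(6), rng.random((6, 4, 5))))
```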

  17. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  18. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
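
    The kind of model such a tool emits can be sketched as a small continuous-time Markov chain for a two-unit standby pair, solved with the matrix exponential; the rates, coverage factor, and state layout below are illustrative assumptions, not ARM output.

```python
# Hedged sketch: states are (2 good, 1 good, system failed); `lam` is the
# per-unit failure rate and `c` the switchover coverage probability.
import numpy as np
from scipy.linalg import expm

lam, c = 1e-4, 0.99                 # failures per hour, coverage (assumed values)
Q = np.array([
    [-lam,  c * lam,  (1 - c) * lam],   # 2 good: covered vs uncovered failure
    [0.0,  -lam,       lam         ],   # 1 good: the next failure is fatal
    [0.0,   0.0,       0.0         ],   # failed: absorbing state
])

p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)            # state probabilities at time t
    print(f"R({t:g} h) = {1.0 - p[2]:.6f}")
```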

  19. Automatic generation of fuzzy rules for the sensor-based navigation of a mobile robot

    SciTech Connect

    Pin, F.G.; Watanabe, Y.

    1994-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.
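
    The flavour of the generated rule bases can be sketched with two hand-written rules over triangular membership functions and centroid defuzzification; the rules and parameters below are illustrative, not output of the described generator.

```python
# Hedged sketch: two fuzzy navigation rules (distances in metres, steering in
# degrees), resolved by centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def steer(front_dist):
    near = tri(front_dist, 0.0, 0.25, 1.0)    # IF obstacle NEAR ...
    far = tri(front_dist, 0.5, 2.0, 3.5)      # IF obstacle FAR ...
    turns = np.linspace(-90.0, 90.0, 181)     # candidate steering angles
    hard_turn = tri(turns, 30.0, 60.0, 90.0)  # ... THEN turn hard
    straight = tri(turns, -20.0, 0.0, 20.0)   # ... THEN keep heading
    agg = np.maximum(np.minimum(near, hard_turn), np.minimum(far, straight))
    return float(np.sum(turns * agg) / np.sum(agg))  # centroid defuzzification

for d in (0.2, 1.0, 1.8):
    print(f"front distance {d:.1f} m -> steer {steer(d):+.1f} deg")
```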

  20. Sensor-based navigation of a mobile robot using automatically constructed fuzzy rules

    SciTech Connect

    Watanabe, Y.; Pin, F.G.

    1993-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  1. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  2. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    ERIC Educational Resources Information Center

    Perez-Marin, Diana; Pascual-Nieto, Ismael

    2010-01-01

    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  3. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  4. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses Multilayer Perceptron Neural Networks to analyze the supernova spectra. Experiments were performed using different pre-processing and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained, indicating the viability of using this method in places that have no specialist available or that require automatic analysis.
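
    A minimal sketch of the classification set-up, a multilayer perceptron over fixed-length spectra, is shown below with synthetic placeholder spectra and type labels.

```python
# Hedged sketch: MLP classification of preprocessed supernova spectra.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(18)
spectra = rng.random((120, 300))              # normalized flux vs wavelength bins
types = rng.choice(["Ia", "Ib", "II"], 120)   # placeholder type labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1500, random_state=0)
clf.fit(spectra[:100], types[:100])
print(clf.predict(spectra[100:103]))
```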

  5. A Zipfian Model of an Automatic Bibliographic System: An Application to MEDLINE.

    ERIC Educational Resources Information Center

    Fedorowicz, Jane

    1982-01-01

    Derives the underlying structure of the Zipf distribution, with emphasis on its application to word frequencies in the inverted files of automatic bibliographic systems, and applies the Zipfian model to the National Library of Medicine's MEDLINE database. An appendix on the Zipfian mean and 12 references are included. (Author/JL)
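
    As a concrete illustration of the model this record applies, the sketch below compares observed rank-ordered word counts with the counts a pure Zipf law (the r-th most frequent term occurring with probability proportional to 1/r) would predict. The toy token list stands in for an inverted-file vocabulary; nothing here is drawn from MEDLINE itself.

        # Compare empirical rank-frequency counts with a pure Zipf law.
        from collections import Counter

        def zipf_expected_counts(total_tokens, vocabulary_size):
            """Expected token counts under a Zipf law with exponent 1."""
            norm = sum(1.0 / r for r in range(1, vocabulary_size + 1))
            return [total_tokens / (r * norm) for r in range(1, vocabulary_size + 1)]

        def empirical_rank_counts(tokens):
            """Observed counts sorted into rank order, for comparison."""
            return [count for _, count in Counter(tokens).most_common()]

        text = "the cat sat on the mat and the dog sat on the rug".split()
        print(empirical_rank_counts(text))         # [4, 2, 2, 1, 1, 1, 1, 1]
        print(zipf_expected_counts(len(text), 8))  # Zipfian prediction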

  6. FishCam - A semi-automatic video-based monitoring system of fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2016-04-01

    One of the main objectives of the Water Framework Directive is to preserve and restore the continuum of river networks. Regarding vertebrate migration, fish passes are a widely used measure to overcome anthropogenic constructions, and the functionality of this measure needs to be verified by monitoring. In this study we propose a newly developed monitoring system, named FishCam, to observe fish migration, especially in fish passes, without contact and without imposing stress on fish. To avoid time- and cost-consuming field work for fish pass monitoring, this project aims to develop a semi-automatic monitoring system that enables continuous observation of fish migration. The system consists of a detection tunnel and a high-resolution camera and is mainly based on the technology of security cameras. If changes in the image, e.g. by migrating fish or drifting particles, are detected by a motion sensor, the camera system starts recording and continues until no further motion is detectable. An ongoing key challenge in this project is the development of robust software which counts, measures and classifies the passing fish. To achieve this goal, many different computer vision tasks and classification steps have to be combined. Moving objects have to be detected and separated from the static part of the image, objects have to be tracked throughout the entire video, and fish have to be separated from non-fish objects (e.g. foliage and woody debris, shadows and light reflections). Subsequently, the length of all detected fish needs to be determined, and fish should be classified into species. The classification into fish and non-fish objects is realized through ensembles of state-of-the-art classifiers applied to a single image per object. The choice of the best image for classification is implemented through a newly developed "fish benchmark" value, which compares the actual shape of the object with a schematic side-view fish model. To enable an automatization of the

  7. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  8. An automatic multi-lead electrocardiogram segmentation algorithm based on abrupt change detection.

    PubMed

    Illanes-Manriquez, Alfredo

    2010-01-01

    Automatic detection of electrocardiogram (ECG) waves provides important information for cardiac disease diagnosis. In this paper a new algorithm is proposed for automatic ECG segmentation based on multi-lead ECG processing. Two auxiliary signals are computed from the first and second derivatives of several ECG leads signals. One auxiliary signal is used for R peak detection and the other for ECG waves delimitation. A statistical hypothesis testing is finally applied to one of the auxiliary signals in order to detect abrupt mean changes. Preliminary experimental results show that the detected mean changes instants coincide with the boundaries of the ECG waves.
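
    The record above pairs derivative-based auxiliary signals with a test for abrupt mean changes. The sketch below shows one generic way such a detector can be assembled; the paper's exact auxiliary signals and hypothesis test are not reproduced here, and the window length and threshold are illustrative assumptions.

        # Generic abrupt-mean-change detector on a derivative-based signal.
        import numpy as np

        def auxiliary_signal(ecg_leads):
            """Combine first- and second-derivative energy across leads."""
            d1 = np.diff(ecg_leads, n=1, axis=1)[:, :-1]
            d2 = np.diff(ecg_leads, n=2, axis=1)
            return np.sum(d1**2 + d2**2, axis=0)

        def mean_change_points(signal, window=40, threshold=4.0):
            """Flag samples where adjacent-window means differ sharply."""
            changes = []
            for t in range(window, len(signal) - window):
                left, right = signal[t - window:t], signal[t:t + window]
                spread = np.std(np.concatenate([left, right])) + 1e-12
                if abs(right.mean() - left.mean()) / spread > threshold:
                    changes.append(t)
            return changes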

  9. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaru, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  10. Automatic calibration of space based manipulators and mechanisms

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1988-01-01

    Four tasks in manipulator kinematic calibration are summarized. Calibration of a seven degree of freedom manipulator was simulated. A calibration model is presented that can be applied on a closed-loop robot. It is an expansion of open-loop kinematic calibration algorithms subject to constraints. A closed-loop robot with a five-bar linkage transmission was tested. Results show that the algorithm converges within a few iterations. The concept of model differences is formalized. Differences are categorized as structural and numerical, with emphasis on the structural. The work demonstrates that geometric manipulators can be visualized as points in a vector space with the dimension of the space depending solely on the number and type of manipulator joint. Visualizing parameters in a kinematic model as the coordinates locating the manipulator in vector space enables a standard evaluation of the models. Key results include a derivation of the maximum number of parameters necessary for models, a formal discussion on the inclusion of extra parameters, and a method to predetermine a minimum model structure for a kinematic manipulator. A technique is presented that enables single point sensors to gather sufficient information to complete a calibration.

  11. Automatic Summarization of MEDLINE Citations for Evidence-Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
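
    Mean average precision, the headline metric in this record, is straightforward to compute from ranked outputs with binary relevance judgements; a minimal sketch with made-up judgements:

        # MAP over ranked intervention lists (1 = judged relevant, 0 = not).
        def average_precision(ranked_relevance):
            hits, precision_sum = 0, 0.0
            for i, rel in enumerate(ranked_relevance, start=1):
                if rel:
                    hits += 1
                    precision_sum += hits / i
            return precision_sum / hits if hits else 0.0

        def mean_average_precision(per_query_relevance):
            return sum(average_precision(q) for q in per_query_relevance) / len(per_query_relevance)

        # e.g. two diseases with ranked interventions judged relevant or not
        print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))  # ~0.71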

  12. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
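
    A minimal version of the selective background-update step this record refers to is sketched below; the paper's zone-distribution scheme is not reproduced, and the learning rate and motion threshold are illustrative assumptions.

        # Selective running-average background update for vehicle detection.
        import numpy as np

        def update_background(background, frame, alpha=0.02, threshold=25):
            """Blend the frame into the background only where no motion is seen."""
            frame = frame.astype(np.float32)
            moving = np.abs(frame - background) > threshold   # candidate vehicles
            adapted = (1 - alpha) * background + alpha * frame
            # frozen where motion was detected, slowly adapted elsewhere
            return np.where(moving, background, adapted), moving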

  13. Rheticus: an automatic cloud-based geo-information service platform for territorial monitoring

    NASA Astrophysics Data System (ADS)

    Samarelli, Sergio; Lorusso, Antonio Pio; Agrimano, Luigi; Nutricato, Raffaele; Bovenga, Fabio; Nitti, Davide Oscar; Chiaradia, Maria Teresa

    2016-10-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and, where appropriate, minimal interaction with human operators. In this paper, we outline the capabilities of the "Rheticus® Displacement" service, designed for geohazard and infrastructure monitoring through Multi-Temporal SAR Interferometry techniques.

  14. An Automatic Document Indexing System Based on Cooperating Expert Systems: Design and Development.

    ERIC Educational Resources Information Center

    Schuegraf, Ernst J.; van Bommel, Martin F.

    1993-01-01

    Describes the design of an automatic indexing system that is based on statistical techniques and expert system technology. Highlights include system architecture; the derivation of topic indicators, including word frequency; experimental results using documents from ERIC; the effects of stemming; and the identification of characteristic…

  15. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  16. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  17. Applications of Neural Network Models in Automatic Speech Recognition.

    DTIC Science & Technology

    1986-09-29

    Computing elements that follow the principles of physiological neurons, called neural network models, have been shown to have the capability of learning...to recognize patterns and to retrieve complete patterns from partial representations. The implementation of neural network models as VLSI or ULSI chips...requires the integration of millions, if not billions, of these computing elements. Until recently, this was a practical impossibility. But great advances in VLSI

  18. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    NASA Astrophysics Data System (ADS)

    Juromskiy, V. M.

    2016-09-01

    A mathematical model is developed for the electric drive of a high-speed separation device in the Simulink (MATLAB) environment for modeling dynamic systems. The model is focused on the study of automatic control systems for the power factor (cos φ) of an actuator that compensate the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.
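
    The controlled quantity can be illustrated with a short worked example: for an actuator modelled as resistance R in series with inductive reactance X_L, inserting a series capacitor of reactance X_C gives cos φ = R / sqrt(R² + (X_L − X_C)²). The component values below are assumptions for illustration, not taken from the paper.

        # Power factor of a series R-L circuit with series capacitive compensation.
        import math

        def power_factor(resistance, inductive_reactance, capacitive_reactance=0.0):
            net_reactance = inductive_reactance - capacitive_reactance
            return resistance / math.hypot(resistance, net_reactance)

        R, XL = 10.0, 12.0                  # ohms; assumed actuator parameters
        print(power_factor(R, XL))          # uncompensated: ~0.64
        print(power_factor(R, XL, 9.0))     # with the capacitor bank: ~0.96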

  19. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  20. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller which drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller, and the comparison verifies the improved performance of the controller for weights computed with the PSO-based technique.
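
    The two-level structure described above can be sketched as an outer PSO loop proposing weight vectors and an inner evaluation scoring each closed-loop simulation. In the sketch below the inner simulation is a stand-in stub, and the PSO constants are conventional textbook values rather than the paper's settings.

        # Outer PSO loop tuning NMPC cost-function weights.
        import numpy as np

        def evaluate_weights(w):
            """Stub for: run NMPC with weights w on the turbine model, return cost."""
            return float(np.sum((w - np.array([1.0, 5.0, 0.1]))**2))  # toy optimum

        def pso(n_particles=20, n_iters=50, dim=3, lo=0.0, hi=10.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([evaluate_weights(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([evaluate_weights(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest

        print(pso())  # converges near the stub optimum [1.0, 5.0, 0.1]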

  1. Automatic Calibration Method for a Storm Water Runoff Model

    NASA Astrophysics Data System (ADS)

    Barco, J.; Wong, K. M.; Hogue, T.; Stenstrom, M. K.

    2007-12-01

    Major metropolitan areas are characterized by continuous increases in imperviousness due to urban development. Increasing imperviousness increases runoff volume and maximum rates of runoff, with generally negative consequences for natural systems. To avoid environmental degradation, new development standards often prohibit increases in total runoff volume and may limit maximum flow rates. Methods to reduce runoff volume and maximum runoff rate are required, and solutions to the problems may benefit from the use of advanced models. In this study the U.S. Storm Water Management Model (SWMM) was adapted and calibrated to the Ballona Creek watershed, a large urban catchment in Southern California. A geographic information system (GIS) was used to process the input data and generate the spatial distribution of precipitation. An optimization procedure using the Complex Method was incorporated to estimate runoff parameters, and ten storms were used for calibration and validation. The calibrated model predicted the observed outputs with reasonable accuracy. A sensitivity analysis showed the impact of the model parameters: results were most sensitive to imperviousness and impervious depression storage, and least sensitive to Manning roughness for surface flow. Optimized imperviousness was greater than imperviousness predicted from land-use information. The results demonstrate that this methodology of integrating GIS and a stormwater model with a constrained optimization technique can be applied to large watersheds, and can be a useful tool to evaluate alternative strategies to reduce runoff rate and volume.

  2. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs etc., and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has the option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible through a toll-free DID (Direct Inward Dialing) number from a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured along a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format protocol for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which supply data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open-source IP-PBX), and we discuss how our system can be integrated with GPS for locating the user's coordinates, thereby efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates these numbers and other information, such as daily gas and motel prices, automatically using an Atom-based feed. Current commercial directory services (e.g., 411) have no facility to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
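
    The distance computation this record relies on amounts to mapping each latitude/longitude pair to 3-D Cartesian coordinates and taking the straight-line (chord) distance through the Earth; chord length grows monotonically with geodesic distance, so ranking the nearest businesses by either measure gives the same order. A minimal sketch assuming a spherical Earth:

        # Euclidean (chord) distance between two lat/lon points.
        import math

        EARTH_RADIUS_KM = 6371.0

        def to_cartesian(lat_deg, lon_deg):
            lat, lon = math.radians(lat_deg), math.radians(lon_deg)
            return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
                    EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
                    EARTH_RADIUS_KM * math.sin(lat))

        def chord_distance_km(p, q):
            return math.dist(to_cartesian(*p), to_cartesian(*q))

        # e.g. rank candidate businesses by distance from the caller
        print(chord_distance_km((38.9, -77.0), (38.8, -77.1)))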

  3. The Acoustic-Modeling Problem in Automatic Speech Recognition.

    DTIC Science & Technology

    1987-12-01

    Systems that use an artificial grammar do so in order to set this uncertainty by fiat, thereby ensuring that their task will not be too difficult...an artificial grammar, the Pr(W = w)'s are known and Hm(W) can, in fact, achieve its lower bound if the system simply uses these probabilities. In a...finite-state grammar represented by that chain. As Jim Baker points out, the modeling of speech by a hidden Markov model should not be regarded as a

  4. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    also during the strong motion phase. This approach helps to overcome the limitation derived from the use of techniques based on simple Fourier Transform that provide good results when the response of the monitored system is stationary, but fails when the system exhibits a non-stationary behaviour. The main advantage derived from the use of the proposed approach for Structural Health Monitoring is based on the simplicity of the interpretation of the nonlinear variations of the fundamental frequency. The proposed methodology has been tested on numerical models of reinforced concrete structures designed for only gravity loads without and with the presence of infill panels. In order to verify the effectiveness of the proposed approach for the automatic evaluation of the fundamental frequency over time, the results of an experimental campaign of shaking table tests conducted at the seismic laboratory of University of Basilicata (SISLAB) have been used. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''. References Ditommaso, R., Mucciarelli, M., Ponzo, F.C. (2012) Analysis of non-stationary structural systems by using a band-variable filter. Bulletin of Earthquake Engineering. DOI: 10.1007/s10518-012-9338-y.

  5. Automatic age and gender classification using supervised appearance model

    NASA Astrophysics Data System (ADS)

    Bukar, Ali Maina; Ugail, Hassan; Connah, David

    2016-11-01

    Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
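
    The core substitution this record proposes, supervised partial least squares in place of unsupervised PCA for appearance features, can be sketched in a few lines with scikit-learn; the feature matrix and labels below are synthetic placeholders for AAM shape-and-texture vectors.

        # PCA ignores the labels; PLS extracts components guided by them.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 50))                # stand-in appearance vectors
        y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)  # e.g. gender

        pca_features = PCA(n_components=5).fit_transform(X)           # unsupervised
        pls = PLSRegression(n_components=5).fit(X, y.reshape(-1, 1))
        pls_features = pls.transform(X)                               # supervised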

  6. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  7. A new approach for automatic sleep scoring: Combining Taguchi based complex-valued neural network and complex wavelet transform.

    PubMed

    Peker, Musa

    2016-06-01

    Automatic classification of sleep stages is one of the most important methods used for diagnostic procedures in psychiatry and neurology. This method, which has been developed by sleep specialists, is a time-consuming and difficult process. Generally, electroencephalogram (EEG) signals are used in sleep scoring. In this study, a new complex classifier-based approach is presented for automatic sleep scoring using EEG signals. In this context, complex-valued methods were utilized in the feature selection and classification stages. In the feature selection stage, features of EEG data were extracted with the help of a dual tree complex wavelet transform (DTCWT). In the next phase, five statistical features were obtained. These features are classified using complex-valued neural network (CVANN) algorithm. The Taguchi method was used in order to determine the effective parameter values in this CVANN. The aim was to develop a stable model involving parameter optimization. Different statistical parameters were utilized in the evaluation phase. Also, results were obtained in terms of two different sleep standards. In the study in which a 2nd level DTCWT and CVANN hybrid model was used, 93.84% accuracy rate was obtained according to the Rechtschaffen & Kales (R&K) standard, while a 95.42% accuracy rate was obtained according to the American Academy of Sleep Medicine (AASM) standard. Complex-valued classifiers were found to be promising in terms of the automatic sleep scoring and EEG data.

  8. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results produced by an expert.

  9. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounds and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebra, the circle model is extended along the z-axis into a cylinder model to take into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. Experiments show that the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.
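
    The circular model-fitting step can be illustrated with an algebraic (Kasa) least-squares circle fit to candidate boundary points; the tracking, cylinder extension and graph-cut stages of the paper are not reproduced here.

        # Algebraic least-squares circle fit to 2-D boundary samples.
        import numpy as np

        def fit_circle(points):
            """points: (N, 2) array -> (centre_x, centre_y, radius)."""
            x, y = points[:, 0], points[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones(len(points))])
            b = x**2 + y**2
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            cx, cy, c = coeffs
            return cx, cy, np.sqrt(c + cx**2 + cy**2)

        theta = np.linspace(0, 2 * np.pi, 30)
        rim = np.column_stack([3 + 2 * np.cos(theta), 1 + 2 * np.sin(theta)])
        print(fit_circle(rim))  # ~ (3.0, 1.0, 2.0)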

  10. Automatic Adjustment of Wide-Base Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  11. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients avoid blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection, but this method requires manual post-imaging modifications that are time-consuming and subjective, depending on image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from existing methods, our approach can extract both geometric properties (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions: automatic localisation of the optic disc, automatic segmentation of the disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4%, and the accuracy of detection of suspicion of glaucoma in SLO images is 93.9%.

  12. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study: Long Period (LP) and Volcano-Tectonic (VT). Experiments show that standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can lead to reductions in the false positive rate in event detection as high as 31%, with a true positive accuracy equal to 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events according to the signal-to-noise ratio criteria recommended by a volcano expert.

  13. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining sizable, good-quality data to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
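
    The training-data generation idea can be sketched directly: names from an (incomplete) dictionary are embedded into random filler text, and token-level labels fall out for free. The dictionary, filler vocabulary and tagging scheme below are toy stand-ins, not the paper's.

        # Generate auto-annotated, chemical-like training sentences.
        import random

        CHEMICALS = ["acetone", "benzene", "sodium chloride", "ethanol"]
        FILLER = ["the", "sample", "was", "mixed", "with", "and", "then", "heated"]

        def synth_sentence(rng, p_chem=0.2):
            tokens, labels = [], []
            for _ in range(rng.randint(5, 12)):
                if rng.random() < p_chem:
                    for i, tok in enumerate(rng.choice(CHEMICALS).split()):
                        tokens.append(tok)
                        labels.append("B-CHEM" if i == 0 else "I-CHEM")
                else:
                    tokens.append(rng.choice(FILLER))
                    labels.append("O")
            return tokens, labels

        tokens, labels = synth_sentence(random.Random(0))
        print(list(zip(tokens, labels)))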

  14. Validation of the Operating and Support Cost Model for Avionics Automatic Test Equipment (OSCATE).

    DTIC Science & Technology

    1980-06-01

    are essentially determined during the initial design phases. This further emphasises the need for consideration of the impact of design decisions...program manager can use in evaluation of the cost impact of available alternatives. The Operating and Support Cost Model for Automatic Test Equipment...The OSCATE model has been developed for use by program managers to evaluate the impact of ATE system para

  15. Action levels for automatic gamma-measurements based on probabilistic radionuclide transport calculations.

    PubMed

    Lauritzen, Bent; Hedemann-Jensen, Per

    2005-12-01

    In the event of a nuclear or radiological emergency resulting in an atmospheric release of radioactive materials, stationary gamma-measurements, for example obtained from distributed, automatic monitoring stations, may provide a first assessment of exposures resulting from airborne and deposited activity. Decisions on the introduction of countermeasures for the protection of the public can be based on such off-site gamma measurements. A methodology is presented for calculation of gamma-radiation action levels for the introduction of specific countermeasures, based on probabilistic modelling of the dispersion of radionuclides and the radiation exposure. The methodology is applied to a nuclear accident situation with long-range atmospheric dispersion of radionuclides, and action levels of dose rate measured by a network of monitoring stations are estimated for sheltering and foodstuff restrictions. It is concluded that the methodology is applicable to all emergency countermeasures following a nuclear accident but measurable quantities other than ambient dose equivalent rate are needed for decisions on the introduction of foodstuff countermeasures.

  16. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of various planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R² > 0.85) with the diameters of manually detected impact craters.

  17. Automatic construction of rule-based ICD-9-CM coding systems

    PubMed Central

    Farkas, Richárd; Szarvas, György

    2008-01-01

    Background In this paper we focus on the problem of automatically constructing ICD-9-CM coding systems for radiology reports. ICD-9-CM codes are used for billing purposes by health institutes and are assigned to clinical records manually following clinical treatment. Since this labeling task requires expert knowledge in the field of medicine, the process itself is costly and is prone to errors as human annotators have to consider thousands of possible codes when assigning the right ICD-9-CM labels to a document. In this study we use the datasets made available for training and testing automated ICD-9-CM coding systems by the organisers of an International Challenge on Classifying Clinical Free Text Using Natural Language Processing in spring 2007. The challenge itself was dominated by entirely or partly rule-based systems that solve the coding task using a set of hand crafted expert rules. Since the feasibility of the construction of such systems for thousands of ICD codes is indeed questionable, we decided to examine the problem of automatically constructing similar rule sets that turned out to achieve a remarkable accuracy in the shared task challenge. Results Our results are very promising in the sense that we managed to achieve comparable results with purely hand-crafted ICD-9-CM classifiers. Our best model got a 90.26% F measure on the training dataset and an 88.93% F measure on the challenge test dataset, using the micro-averaged Fβ=1 measure, the official evaluation metric of the International Challenge on Classifying Clinical Free Text Using Natural Language Processing. This result would have placed second in the challenge, with a hand-crafted system achieving slightly better results. Conclusions Our results demonstrate that hand-crafted systems – which proved to be successful in ICD-9-CM coding – can be reproduced by replacing several laborious steps in their construction with machine learning models. These hybrid systems preserve the favourable

  18. Assessing Automatic Aid as an Emergency Response Model

    DTIC Science & Technology

    2013-12-01

    ...noted by all interview subjects and provides for the closest resource "without regard to the name on the door," as noted by Battalion Chief Matt Herbert...states that he "expects to go to Alexandria or Fairfax every day," and Herbert expands on the close interaction by noting, "crews have dinner, drill

  19. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually comprises a Computed Tomography (CT) examination. Aiming at eliminating the high CT radiation exposure, this work presents a new methodology for replacing CT by a (radiation-free) laser scanner in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT involves determining the external outline of the ribs, at the point of maximum sternum depression where the prosthesis is placed, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using soft tissue thicknesses determined by image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outlines in laser-scan data from 20 patients. Tests revealed that rib position can be estimated with an average error of about 6.82 ± 5.7 mm for the left and right sides of the patient. Such an error range is well below that of current manual prosthesis modeling (11.7 ± 4.01 mm), even without CT imaging, indicating a considerable step towards replacing CT with a 3D scanner for prosthesis personalization.

  20. A multi-algorithm-based automatic person identification system

    NASA Astrophysics Data System (ADS)

    Monwar, Md. Maruf; Gavrilova, Marina

    2010-04-01

    Multimodal biometrics is an emerging area of research that aims at increasing the reliability of biometric systems by utilizing more than one biometric in the decision-making process. In this work, we develop a multi-algorithm-based multimodal biometric system utilizing face and ear features with rank- and decision-level fusion. We use multilayer perceptron networks and fisherimage approaches for individual face and ear recognition. After face and ear recognition, we integrate the results of the two face matchers using rank-level fusion. We experiment with the highest rank method, the Borda count method, the logistic regression method and the Markov chain method of rank-level fusion. Due to its better recognition performance, we employ the Markov chain approach to combine the face decisions; the combined ear decision is obtained similarly. These two decisions are then combined for the final identification decision. We experiment with the 'AND'/'OR' rule, the majority voting rule and the weighted majority voting rule of decision-level fusion. From the experimental results, we observed that the weighted majority voting rule works better than the other decision fusion approaches, and hence we incorporate this fusion approach for the final identification decision. The final results indicate that a multi-algorithm-based design can certainly improve the recognition performance of multibiometric systems.
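
    Two of the fusion rules this record evaluates, Borda-count rank fusion and weighted majority voting, are compact enough to sketch directly; the identities, rankings and matcher weights below are illustrative.

        # Borda-count rank fusion and weighted-majority decision fusion.
        from collections import defaultdict

        def borda_fusion(rankings):
            """rankings: list of identity lists, best first -> fused order."""
            scores = defaultdict(int)
            for ranking in rankings:
                for pos, identity in enumerate(ranking):
                    scores[identity] += len(ranking) - pos
            return sorted(scores, key=scores.get, reverse=True)

        def weighted_majority(decisions, weights):
            """decisions: {matcher: identity}; weights: {matcher: reliability}."""
            tally = defaultdict(float)
            for matcher, identity in decisions.items():
                tally[identity] += weights[matcher]
            return max(tally, key=tally.get)

        face_rank = ["alice", "bob", "carol"]
        ear_rank = ["bob", "alice", "carol"]
        print(borda_fusion([face_rank, ear_rank]))
        print(weighted_majority({"face": "alice", "ear": "bob"},
                                {"face": 0.6, "ear": 0.4}))   # -> alice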

  1. Automatic generation of computable implementation guides from clinical information models.

    PubMed

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented to human readability and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or the Object Constraint Language (OCL). This task can be difficult and error-prone due to the large gap between the two representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily while computers can still process them. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for generating implementation guides. We use archetypes to generate formal rules expressed in the Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from the NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  2. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between different stations. In order to make scanning measurement intelligent and rapid, we developed a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement without any additional complex work. The two cameras on the laser scanner photograph artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by a least-squares common-point transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain point-cloud data in a unified coordinate system. There are three major contributions in this paper. First, a laser scanner based on binocular vision is designed with two cameras and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, coded markers are introduced to solve the data-matching problem, and a random coding method is proposed; compared with other coding methods, markers coded this way are simple to match and avoid shading the object. Finally, a recognition method for coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method can realize the dynamic measurement of handheld laser

  3. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM include its ability to handle large numbers of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support

  4. Automatic kernel regression modelling using combined leave-one-out test score and regularised orthogonal least squares.

    PubMed

    Hong, X; Chen, S; Sharkey, P M

    2004-02-01

    This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches: parameter regularisation via ridge regression, and model optimal generalisation structure selection. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive computation formula for PRESS errors in the regularised orthogonal least squares forward regression framework, and hence to construct a model with a good generalisation property. Based on the properties of the PRESS statistic, the proposed algorithm can achieve a fully automated model construction procedure without resorting to any other validation data set for model evaluation.
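
    The PRESS statistic at the heart of this record exploits a closed form: for a linear-in-the-parameters model, the leave-one-out residual at sample i equals e_i / (1 - h_ii), where h_ii is a diagonal entry of the hat matrix, so no model refitting is needed. A minimal sketch for ridge regression (data and ridge parameter illustrative):

        # Closed-form PRESS (sum of squared leave-one-out residuals).
        import numpy as np

        def press_statistic(X, y, ridge=1e-3):
            G = X.T @ X + ridge * np.eye(X.shape[1])
            H = X @ np.linalg.solve(G, X.T)        # hat matrix
            residuals = y - H @ y                  # ordinary fitted residuals
            loo = residuals / (1.0 - np.diag(H))   # leave-one-out residuals
            return float(np.sum(loo**2))

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 4))
        y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)
        print(press_statistic(X, y))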

  5. Automatic generation of water distribution systems based on GIS data.

    PubMed

    Sitzenfrei, Robert; Möderl, Michael; Rauch, Wolfgang

    2013-09-01

    In the field of water distribution system (WDS) analysis, case study research is needed for testing or benchmarking optimisation strategies and newly developed software. However, data availability for the investigation of real cases is limited due to time and cost needed for data collection and model setup. We present a new algorithm that addresses this problem by generating WDSs from GIS using population density, housing density and elevation as input data. We show that the resulting WDSs are comparable to actual systems in terms of network properties and hydraulic performance. For example, comparing the pressure heads for an actual and a generated WDS results in pressure head differences of ±4 m or less for 75% of the supply area. Although elements like valves and pumps are not included, the new methodology can provide water distribution systems of varying levels of complexity (e.g., network layouts, connectivity, etc.) to allow testing design/optimisation algorithms on a large number of networks. The new approach can be used to estimate the construction costs of planned WDSs aimed at addressing population growth or at comparisons of different expansion strategies in growth corridors.

  6. Automatic exudate detection using active contour model and regionwise classification.

    PubMed

    Harangi, B; Lazar, I; Hajdu, A

    2012-01-01

    Diabetic retinopathy is one of the most common causes of blindness in the world. Exudates are among the early signs of this disease, so their proper detection is very important to prevent consequent effects. In this paper, we propose a novel approach for exudate detection. First, we identify possible regions containing exudates using grayscale morphology. Then, we apply an active contour-based method that minimizes the Chan-Vese energy to extract accurate borders of the candidates. To remove false candidates whose borders are strong enough to pass the active contour step, we use a region-wise classifier: we extract several shape features for each candidate and let a boosted Naïve Bayes classifier eliminate the false candidates. We considered the publicly available DiaretDB1 color fundus image set for testing, where the proposed method outperformed several state-of-the-art exudate detectors.

  7. Towards an Automatic and Application-Based EigensolverSelection

    SciTech Connect

    Zhang, Yeliang; Li, Xiaoye S.; Marques, Osni

    2005-09-09

    The computation of eigenvalues and eigenvectors is an important and often time-consuming phase in computer simulations. Recent efforts in the development of eigensolver libraries have given users good algorithms without the need for users to spend much time in programming. Yet, given the variety of numerical algorithms that are available to domain scientists, choosing the "best" algorithm suited for a particular application is a daunting task. As simulations become increasingly sophisticated and larger, it becomes infeasible for a user to try out every reasonable algorithm configuration in a timely fashion. Therefore, there is a need for an intelligent engine that can guide the user through the maze of various solvers with various configurations. In this paper, we present a methodology and a software architecture aiming at determining the best solver based on the application type and the matrix properties. We combine a decision tree and an intelligent engine to select a solver and a preconditioner combination for the application submitted by the user. We also discuss how our system interface is implemented with third party numerical libraries. In the case study, we demonstrate the feasibility and usefulness of our system with a simplified linear solving system. Our experiments show that our proposed intelligent engine is quite adept in choosing a suitable algorithm for different applications.

  8. Integrating spatial altimetry data into the automatic calibration of hydrological models

    NASA Astrophysics Data System (ADS)

    Getirana, Augusto C. V.

    2010-06-01

    The automatic calibration of hydrological models has traditionally been performed using gauged data. However, inaccessibility of remote areas and lack of financial support cause data to be lacking in large tropical basins, such as the Amazon basin. Advances in the acquisition, processing and availability of spatially distributed remotely sensed data make the evaluation of computational models easier and more practical. This paper presents the pioneering integration of spatial altimetry data into the automatic calibration of a hydrological model. The study area is the Branco River basin, located in the Northern Amazon basin. An empirical stage × discharge relation is obtained for the Negro River and transposed to the Branco River, which enables the correlation of spatial altimetry data with water discharge derived from the MGB-IPH hydrological model. Six scenarios are created combining two sets of objective functions with three different datasets. Two of them are composed of ENVISAT altimetric data, and the third one is derived from daily gauged discharges. The MOCOM-UA multi-criteria global optimization algorithm is used to optimize the model parameters. The calibration process is validated with gauged discharge at three gauge stations located along the Branco River and two tributaries. Results demonstrate that the combination of virtual stations along the river can provide reasonable parameters. Further, the considerably reduced number of observations provided by the satellite is not a restriction for automatic calibration, which yields performance coefficients similar to those obtained using daily gauged data.

  9. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; such searches are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
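
    A compact sketch of the recommendation stage is shown below: data-set meta-features are mapped to a binary "applicable kernel" vector with a multi-label classifier. The meta-features, kernel list, and toy meta-knowledge base are illustrative stand-ins for the paper's data characteristics and eleven kernels.

```python
# Sketch of multi-label kernel recommendation from data-set meta-features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

KERNELS = ["linear", "poly", "rbf", "sigmoid", "laplace"]

# Meta-knowledge base: one row of meta-features per data set
# (e.g. n_samples, n_features, class entropy, mean feature correlation)
# and a binary indicator per kernel marking it "applicable".
X_meta = np.array([[1000, 20, 0.90, 0.10],
                   [200, 500, 0.50, 0.40],
                   [5000, 8, 0.99, 0.05]])
Y_meta = np.array([[1, 0, 1, 0, 1],
                   [1, 1, 0, 0, 0],
                   [0, 0, 1, 1, 1]])

recommender = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
recommender.fit(X_meta, Y_meta)

# Recommend kernels for a new, unseen data set from its meta-features.
new_dataset = np.array([[800, 30, 0.8, 0.2]])
flags = recommender.predict(new_dataset)[0]
print([k for k, f in zip(KERNELS, flags) if f])
```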

  10. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; such searches are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  11. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time-consuming to construct, by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively).
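
    The voxel-wise voting step that fuses the independent segmentations can be sketched in a few lines; the code below is a generic majority-vote fusion in NumPy, not the MAGeT Brain implementation itself.

```python
# Voxel-wise majority-vote label fusion across candidate segmentations.
import numpy as np

def majority_vote(label_volumes):
    """Fuse a list of integer label volumes by per-voxel voting."""
    stack = np.stack(label_volumes)            # (n_templates, x, y, z)
    n_labels = int(stack.max()) + 1
    # Count votes per label at every voxel, then take the argmax.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)
    return votes.argmax(axis=0)

# Three toy 2x2x1 candidate segmentations:
cands = [np.array([[[0], [1]], [[1], [2]]]),
         np.array([[[0], [1]], [[2], [2]]]),
         np.array([[[0], [0]], [[1], [2]]])]
print(majority_vote(cands)[..., 0])   # per-voxel consensus labels
```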

  12. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-01

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by that number of modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of the reference CBCT sampled on the corresponding bladder contour and the directional gradient field of the validation
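
    A minimal sketch of the PCA shape-model idea follows, using scikit-learn on flattened contour point sets with synthetic data; point correspondence across scans is assumed to be already established, and the tolerance below stands in for the paper's 0.1 cm criterion.

```python
# PCA-based patient-specific shape model: contours are encoded as
# flattened point vectors; a new shape is the mean plus weighted modes.
import numpy as np
from sklearn.decomposition import PCA

# Training shapes: n_scans x (n_points * 3) matrix of contour coordinates.
rng = np.random.default_rng(0)
base = rng.normal(size=300)                        # 100 3-D points, flattened
shapes = base + 0.05 * rng.normal(size=(6, 300))   # 6 toy training scans

pca = PCA()
pca.fit(shapes)

# Keep the smallest number of modes whose mean reconstruction residual
# falls below a tolerance (illustrative stand-in for the 0.1 cm rule).
for k in range(1, len(pca.explained_variance_) + 1):
    coeffs = pca.transform(shapes)[:, :k]
    recon = pca.mean_ + coeffs @ pca.components_[:k]
    residual = np.abs(recon - shapes).mean()
    if residual < 0.01:
        break
print(f"{k} PCA modes retained (mean residual {residual:.4f})")

# Segmentation then optimizes the k mode weights so the deformed contour
# matches image gradients in the new CBCT.
```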

  13. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model.

    PubMed

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-21

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by that number of modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of the reference CBCT sampled on the corresponding bladder contour and the directional gradient field of the validation

  14. Lung Lesion Extraction Using a Toboggan Based Growing Automatic Segmentation Approach.

    PubMed

    Song, Jiangdian; Yang, Caiyun; Fan, Li; Wang, Kun; Yang, Feng; Liu, Shiyuan; Tian, Jie

    2016-01-01

    The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, it is challenging to achieve fully automatic lesion detection and segmentation with acceptable accuracy due to the heterogeneity of lung lesions. Here, we propose a novel toboggan based growing automatic segmentation approach (TBGA) with a three-step framework: automatic initial seed-point selection, multi-constraint 3D lesion extraction, and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it can provide a high lesion detection sensitivity (96.35%) and a segmentation accuracy comparable with manual segmentation (P > 0.05), as proved by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with the commonly used level set and skeleton graph cut methods, respectively. The results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for one lesion segmentation was under 8 s using our new method. In conclusion, we believe that the novel TBGA can achieve robust, efficient and accurate lung lesion segmentation in CT images automatically.

  15. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  16. Bond graph modeling, simulation, and reflex control of the Mars planetary automatic vehicle

    NASA Astrophysics Data System (ADS)

    Amara, Maher; Friconneau, Jean Pierre; Micaelli, Alain

    1993-01-01

    The bond graph modeling, simulation, and reflex control study of the Mars Planetary Automatic Vehicle are considered. A simulator derived from a complete bond graph model of the vehicle is presented. This model includes both knowledge and representation models of the mechanical structure, the floor contact, and the Mars site. MACSYMEN (French acronym for aided design method of multi-energetic systems) is used and applied to study the input-output power transfers. The reflex control is then considered, and the controller architecture and locomotion specifics are described. A numerical study highlights some interesting results on the capabilities of the robot and the controller.

  17. Automatic reconstruction of physiological gestures used in a model of birdsong production

    PubMed Central

    Boari, Santiago; Perl, Yonatan Sanz; Margoliash, Daniel; Mindlin, Gabriel B.

    2015-01-01

    Highly coordinated learned behaviors are key to understanding neural processes integrating the body and the environment. Birdsong production is a widely studied example of such behavior in which numerous thoracic muscles control respiratory inspiration and expiration: the muscles of the syrinx control syringeal membrane tension, while upper vocal tract morphology controls resonances that modulate the vocal system output. All these muscles have to be coordinated in precise sequences to generate the elaborate vocalizations that characterize an individual's song. Previously we used a low-dimensional description of the biomechanics of birdsong production to investigate the associated neural codes, an approach that complements traditional spectrographic analysis. The prior study used algorithmic yet manual procedures to model singing behavior. In the present work, we present an automatic procedure to extract low-dimensional motor gestures that could predict vocal behavior. We recorded zebra finch songs and generated synthetic copies automatically, using a biomechanical model for the vocal apparatus and vocal tract. This dynamical model described song as a sequence of physiological parameters the birds control during singing. To validate this procedure, we recorded electrophysiological activity of the telencephalic nucleus HVC. HVC neurons were highly selective to the auditory presentation of the bird's own song (BOS) and gave similar selective responses to the automatically generated synthetic model of song (AUTO). Our results demonstrate meaningful dimensionality reduction in terms of physiological parameters that individual birds could actually control. Furthermore, this methodology can be extended to other vocal systems to study fine motor control. PMID:26378204

  18. A robust automatic birdsong phrase classification: A template-based approach.

    PubMed

    Kaewtip, Kantapon; Alwan, Abeer; O'Reilly, Colm; Taylor, Charles E

    2016-11-01

    Automatic phrase detection systems for bird sounds are useful in several applications as they reduce the need for manual annotations. However, bird phrase detection is challenging due to limited training data and background noise. Limited data occur because of limited recordings or the existence of rare phrases. Background noise interference occurs because of the intrinsic nature of the recording environment, such as wind or other animals. This paper presents a different approach to birdsong phrase classification using template-based techniques suitable even for limited training data and noisy environments. The algorithm utilizes dynamic time-warping (DTW) and prominent (high-energy) time-frequency regions of training spectrograms to derive templates. The performance of the proposed algorithm is compared with traditional DTW and hidden Markov model (HMM) methods under several training and test conditions. DTW works well when the data are limited, while HMMs do better when more data are available, yet both suffer when the background noise is severe. The proposed algorithm outperforms DTW and HMMs in most training and testing conditions, usually by a high margin when the background noise level is high. The innovation of this work is that the proposed algorithm is robust to both limited training data and background noise.
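
    The alignment core of such template matching is the classic DTW distance; a straightforward NumPy implementation is sketched below (the paper's use of prominent time-frequency regions is not reproduced here).

```python
# Classic dynamic time warping (DTW) distance between two feature
# sequences, e.g. spectrogram frame vectors of a template and a query.
import numpy as np

def dtw_distance(a, b):
    """a, b: (T, d) feature sequences of possibly different lengths."""
    na, nb = len(a), len(b)
    cost = np.full((na + 1, nb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[na, nb]

# Toy usage: a template compared against a query of different length.
template = np.sin(np.linspace(0, 3, 40))[:, None]
query = np.sin(np.linspace(0, 3, 55))[:, None]
print(dtw_distance(template, query))
```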

  19. Double-channel on-line automatic fruit grading system based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Junxiong; Xun, Yi; Li, Wei; Zhang, Cong

    2007-01-01

    The technology of fruit grading based on computer vision was studied and a double-channel on-line automatic grading system was built. The grading process included fruit image acquisition, image processing, and fruit tracking and separation. In the first section, a new approach to image grabbing employing an asynchronous-reset camera is presented. Three images of different surfaces of each fruit were collected by rolling the fruits as they passed through the image-capturing area. To acquire clear images, high-frequency fluorescent lamps supplied by three-phase alternating current were used for illumination. In the image processing section, the diameter and a color model were used to identify the grade of the fruits. Fruits were graded into four grades by size and two by color. Each identified fruit was tracked and separated by a novel algorithm realized with a PLC (Programmable Logic Controller). The whole grading system was tested with 1000 citrus fruits. It worked stably at a grading capacity of twelve fruits per second with nine grading levels. The on-line grading results indicated that the accuracy of tracking and separating was higher than 99%, and the overall grading error was less than 3%.

  20. An automatic enzyme immunoassay based on a chemiluminescent lateral flow immunosensor.

    PubMed

    Joung, Hyou-Arm; Oh, Young Kyoung; Kim, Min-Gon

    2014-03-15

    Microfluidic integrated enzyme immunosorbent assay (EIA) sensors are efficient systems for point-of-care testing (POCT). However, such systems are not only relatively expensive but also require a complicated manufacturing process. Therefore, additional fluidic control systems are required for the implementation of EIAs in a lateral flow immunosensor (LFI) strip sensor. In this study, we describe a novel LFI for EIA, the use of which does not require additional steps such as mechanical fluidic control, washing, or injecting. The key concept relies on a delayed-release effect of chemiluminescence substrates (luminol enhancer and hydrogen peroxide generator) by an asymmetric polysulfone membrane (ASPM). When the ASPM was placed between the nitrocellulose (NC) membrane and the substrate pad, substrates encapsulated in the substrate pad were released after 5.3 ± 0.3 min. Using this delayed-release effect, we designed and implemented the chemiluminescent LFI-based automatic EIA system, which sequentially performed the immunoreaction, pH change, substrate release, hydrogen peroxide generation, and chemiluminescent reaction with only 1 sample injection. In a model study, implementation of the sensor was validated by measuring the high sensitivity C-reactive protein (hs-CRP) level in human serum.

  1. An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts

    PubMed Central

    Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S.; Mangin, Jean-Francois; Seong, Joon-Kyung

    2015-01-01

    We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively. PMID:26225419

  2. Implement the RFID position based system of automatic tablets packaging machine for patient safety.

    PubMed

    Chang, Ching-Hsiang; Lai, Yeong-Lin; Chen, Chih-Cheng

    2012-12-01

    Patient safety has been regarded as the most important quality policy of hospital management, and medicine dispensing plays an influential role in the Joint Commission International Accreditation Standards. The problem discussed in this paper is that the automatic tablet packaging machine (ATPM) in the hospital pharmacy department has no mechanism for detecting mistakes when pharmacists replenish the cassettes. In this situation, cassettes can easily be placed back in the wrong positions, and such human errors can have a crucial impact on inpatients. Therefore, this study designs an RFID (radio frequency identification) position based system (PBS) for the ATPM using passive high frequency (HF) tags. First, we placed an HF tag on each cassette and installed HF readers on the cabinets at each position. The system then runs a reading loop to verify the ID number and position of each cassette. Next, the system detects whether the track is open and controls the readers' power consumption to protect the drug storage temperature. Finally, the RFID PBS of the ATPM achieves the goal of avoiding medication errors at any time, for patient safety.

  3. An automatic framework for quantitative validation of voxel based morphometry measures of anatomical brain asymmetry.

    PubMed

    Pepe, Antonietta; Dinov, Ivo; Tohka, Jussi

    2014-10-15

    The study of anatomical brain asymmetries has been a topic of great interest in the neuroimaging community in the past decades. However, the accuracy of brain asymmetry measurements has rarely been investigated. In this study, we propose a fully automatic methodology for the quantitative validation of brain tissue asymmetries as measured by Voxel Based Morphometry (VBM) from structural magnetic resonance (MR) images. Starting from a real MR image, the methodology generates simulated 3D MR images with a known and realistic pattern of inter-hemispheric asymmetry that models the left-occipital right-frontal petalia of a normal brain and the related rightward bending of the inter-hemispheric fissure. As an example, we generated a dataset of 64 simulated MR images and applied this dataset to the quantitative validation of optimized VBM measures of asymmetries in brain tissue composition. Our results suggest that VBM analysis strongly depends on the spatial normalization of the individual brain images, the selected template space, and the amount of spatial smoothing applied. The most accurate asymmetry detections were achieved by 9-degrees-of-freedom registration to the symmetrical template space with 4 to 8 mm spatial smoothing.

  4. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important, as these estimates are today used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer sustainable considering the increasing number of testimonies at BCSF; nevertheless, it can take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave moderate scores (50 to 60% of SQIs correctly determined, and a further 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL), and 3) support vector machines (SVMs). The first two methods are standard, while the third one is more recent. These methods can be applied because the BCSF database already contains more than 47,000 forms whose questions and answers are well adapted to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
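
    Of the three ranking methods, multinomial logistic regression is the most direct to sketch; the toy example below maps encoded questionnaire answers to an SQI class with scikit-learn, using synthetic data rather than the BCSF forms.

```python
# Toy multinomial logistic regression from answer codes to an SQI class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_forms, n_questions = 500, 12
X = rng.integers(0, 5, size=(n_forms, n_questions))    # encoded answers
y = np.clip(X.mean(axis=1).round().astype(int), 1, 4)  # synthetic SQI I-IV

# lbfgs solver handles the multinomial formulation directly.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print(model.predict(X[:5]), y[:5])
```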

  5. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face tracking algorithms can hardly specify the initial location and scale of the face automatically, this paper proposes a fast and robust method to detect and track faces by combining AdaBoost with the Camshift algorithm. First, the location and scale of the face are specified by the AdaBoost algorithm based on Haar-like features and conveyed to the initial search window automatically. Then, we apply the Camshift algorithm to track the face. Experimental results based on the OpenCV software library are good, even in some special circumstances, such as changing light and rapid face movement. Besides, by drawing the tracking trajectory of the face movement, some abnormal behavior events can be analyzed.
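
    A minimal OpenCV sketch of this detect-then-track hand-off is shown below; it assumes a webcam source and a face visible in the first frame, and all parameters are illustrative.

```python
# Haar-cascade face detection handing off to CamShift tracking (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # webcam as a stand-in source
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
faces = cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                                 scaleFactor=1.3, minNeighbors=5)
x, y, w, h = faces[0]                          # assumes a face was found
track_window = (x, y, w, h)

# Hue histogram of the detected face region for back-projection.
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
```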

  6. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light.

    PubMed

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-28

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm⁻² and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications.

  7. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. For the matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path among all possible ones, based on a novel voting strategy. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  8. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology

    PubMed Central

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. For the matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path among all possible ones, based on a novel voting strategy. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  9. Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.

    PubMed

    Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano

    2017-01-31

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches to this problem, accounting for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second one relies on wearable sensors and has the detection of eating behaviours as its main goal.

  10. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light

    NASA Astrophysics Data System (ADS)

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-01

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm⁻² and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00759g

  11. Combination of automatic non-rigid and landmark based registration: the best of both worlds

    NASA Astrophysics Data System (ADS)

    Fischer, Bernd; Modersitzki, Jan

    2003-05-01

    Automatic, parameter-free, and non-rigid registration schemes are known to be valuable tools in various (medical) image processing applications. Typically, these approaches aim to match intensity patterns in each scan by minimizing an appropriate distance measure. The outcome of an automatic registration procedure generally matches the target image quite well on average. However, it may be inaccurate at specific, important locations, such as anatomical landmarks. On the other hand, landmark based registration techniques are designed to accurately match user-specified landmarks. A drawback of landmark based registration is that the intensities of the images are completely neglected; consequently, the registration result away from the landmarks may be very poor. Here we propose a framework for novel registration techniques that combine automatic and landmark driven approaches in order to benefit from the advantages of both strategies. We also propose a general mathematical treatment of this framework and a particular implementation. The procedure computes a displacement field which is guaranteed to produce a one-to-one match between given landmarks and at the same time minimizes an intensity based measure for the remaining parts of the images. The properties of the new scheme are demonstrated for a variety of numerical examples. It is worth noting that we not only present a new approach but a general framework for a variety of different approaches, in which the choice of the main building blocks, the distance measure and the smoothness constraint, is essentially free.

  12. [A wavelet-transform-based method for the automatic detection of late-type stars].

    PubMed

    Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work explores possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a 5-scale wavelet transform of late-type star spectra, the frequency spectrum of the transformed coefficients on the 5th scale consistently manifests a unimodal distribution, with the energy of the frequency spectrum largely concentrated in a small neighborhood centered around the unique peak. For the spectra of other celestial bodies, however, the corresponding frequency spectrum is multimodal and the energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. The proposed method is shown by extensive experiments to be practical and robust.
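
    A sketch of such a detector follows, using PyWavelets for a 5-level decomposition and a unimodality/energy-concentration test on the Fourier magnitude of the coarsest-scale coefficients; the wavelet choice, thresholds, and the use of the approximation coefficients are assumptions, not the authors' exact settings.

```python
# Sketch: 5-level wavelet decomposition of a 1-D spectrum, then a
# unimodality / energy-concentration test on the coarsest coefficients.
import numpy as np
import pywt
from scipy.signal import find_peaks

def looks_late_type(spectrum, energy_frac=0.6, window=5):
    coeffs = pywt.wavedec(spectrum, "db4", level=5)
    c5 = coeffs[0]                          # approximation at scale 5
    mag = np.abs(np.fft.rfft(c5))           # "frequency spectrum" of c5
    peaks, _ = find_peaks(mag, height=0.1 * mag.max())
    main = mag.argmax()
    lo, hi = max(0, main - window), main + window + 1
    concentrated = mag[lo:hi].sum() / mag.sum() >= energy_frac
    return len(peaks) <= 1 and concentrated

# Toy input: smooth pseudo-continuum with one broad absorption band.
x = np.linspace(0, 1, 2048)
spec = 1.0 - 0.3 * np.exp(-((x - 0.5) / 0.05) ** 2)
print(looks_late_type(spec))
```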

  13. Automaticity in acute ischemia: Bifurcation analysis of a human ventricular model

    NASA Astrophysics Data System (ADS)

    Bouchard, Sylvain; Jacquemet, Vincent; Vinet, Alain

    2011-01-01

    Acute ischemia (restriction in blood supply to part of the heart as a result of myocardial infarction) induces major changes in the electrophysiological properties of the ventricular tissue. Extracellular potassium concentration ([K+]o) increases in the ischemic zone, leading to an elevation of the resting membrane potential that creates an “injury current” (IS) between the infarcted and the healthy zone. In addition, the lack of oxygen impairs the metabolic activity of the myocytes and decreases ATP production, thereby affecting ATP-sensitive potassium channels (IKatp). Frequent complications of myocardial infarction are tachycardia, fibrillation, and sudden cardiac death, but the mechanisms underlying their initiation are still debated. One hypothesis is that these arrhythmias may be triggered by abnormal automaticity. We investigated the effect of ischemia on myocyte automaticity by performing a comprehensive bifurcation analysis (fixed points, cycles, and their stability) of a human ventricular myocyte model [K. H. W. J. ten Tusscher and A. V. Panfilov, Am. J. Physiol. Heart Circ. Physiol. 291, H1088 (2006), doi:10.1152/ajpheart.00109.2006] as a function of three ischemia-relevant parameters: [K+]o, IS, and IKatp. In this single-cell model, we found that automatic activity was possible only in the presence of an injury current. Changes in [K+]o and IKatp significantly altered the bifurcation structure of IS, including the occurrence of early afterdepolarizations. The results provide a sound basis for studying higher-dimensional tissue structures representing an ischemic heart.

  14. Automatic parametrization of non-polar implicit solvent models for the blind prediction of solvation free energies

    NASA Astrophysics Data System (ADS)

    Wang, Bao; Zhao, Zhixiong; Wei, Guo-Wei

    2016-09-01

    In this work, a systematic protocol is proposed to automatically parametrize the non-polar part of implicit solvent models with polar and non-polar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory based polarizable Poisson model for modeling polar solvation free energies. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the polar component. For the non-polar component, either the standard model of surface area, molecular volume, and van der Waals interactions or a model with atomic surface areas and molecular volume is employed. To automatically parametrize a non-polar model, we develop scoring and ranking algorithms to classify solute molecules; their non-polar parametrization is then obtained based on the assumption that similar molecules have similar parametrizations. A large database with 668 experimental data points is collected and employed to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five subsets of the database, i.e., SAMPL0-SAMPL4, are employed to further validate the proposed protocol. The optimal RMS errors are 0.93, 2.82, 1.90, 0.78, and 1.03 kcal/mol, respectively, for the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 test sets. The corresponding RMS errors for the polarizable Poisson model with the Amber Bondi radii are 0.93, 2.89, 1.90, 1.16, and 1.07 kcal/mol, respectively.

  15. Automatic construction of subject-specific human airway geometry including trifurcations based on a CT-segmented airway skeleton and surface.

    PubMed

    Miyawaki, Shinjiro; Tawhai, Merryn H; Hoffman, Eric A; Wenzel, Sally E; Lin, Ching-Long

    2017-04-01

    We propose a method to construct three-dimensional airway geometric models based on airway skeletons, or centerlines (CLs). Given a CT-segmented airway skeleton and surface, the proposed CL-based method automatically constructs subject-specific models that contain anatomical information regarding branches, include bifurcations and trifurcations, and extend from the trachea to terminal bronchioles. The resulting model can be anatomically realistic with the assistance of an image-based surface; alternatively a model with an idealized skeleton and/or branch diameters is also possible. This method systematically identifies and classifies trifurcations to successfully construct the models, which also provides the number and type of trifurcations for the analysis of the airways from an anatomical point of view. We applied this method to 16 normal and 16 severe asthmatic subjects using their computed tomography images. The average distance between the surface of the model and the image-based surface was 11 % of the average voxel size of the image. The four most frequent locations of trifurcations were the left upper division bronchus, left lower lobar bronchus, right upper lobar bronchus, and right intermediate bronchus. The proposed method automatically constructed accurate subject-specific three-dimensional airway geometric models that contain anatomical information regarding branches using airway skeleton, diameters, and image-based surface geometry. The proposed method can construct (i) geometry automatically for population-based studies, (ii) trifurcations to retain the original airway topology, (iii) geometry that can be used for automatic generation of computational fluid dynamics meshes, and (iv) geometry based only on a skeleton and diameters for idealized branches.

  16. Automatic stress-relieving music recommendation system based on photoplethysmography-derived heart rate variability analysis.

    PubMed

    Shin, Il-Hyung; Cha, Jaepyeong; Cheon, Gyeong Woo; Lee, Choonghee; Lee, Seung Yup; Yoon, Hyung-Jin; Kim, Hee Chan

    2014-01-01

    This paper presents an automatic stress-relieving music recommendation system (ASMRS) for individual music listeners. The ASMRS uses a portable, wireless photoplethysmography module with a finger-type sensor, and a program that translates heartbeat signals from the sensor to the stress index. The sympathovagal balance index (SVI) was calculated from heart rate variability to assess the user's stress levels while listening to music. Twenty-two healthy volunteers participated in the experiment. The results have shown that the participants' SVI values are highly correlated with their prespecified music preferences. The sensitivity and specificity of the favorable music classification also improved as the number of music repetitions increased to 20 times. Based on the SVI values, the system automatically recommends favorable music lists to relieve stress for individuals.
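
    A common way to compute a sympathovagal balance index from a photoplethysmogram is the LF/HF power ratio of the beat-interval series; the sketch below makes that assumption (the paper's exact SVI definition may differ) and uses synthetic data.

```python
# SVI sketch: beat detection, RR-interval resampling, Welch PSD,
# and an LF/HF power ratio (assumed definition of the SVI).
import numpy as np
from scipy.signal import find_peaks, welch

def svi_from_ppg(ppg, fs):
    # Keep only prominent pulse peaks, at least 0.4 s apart (<150 bpm).
    peaks, _ = find_peaks(ppg, height=0.5, distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                       # beat intervals, seconds
    t = np.cumsum(rr)
    # Resample the RR series to an even 4 Hz grid for spectral analysis.
    t_even = np.arange(t[0], t[-1], 0.25)
    rr_even = np.interp(t_even, t, rr)
    f, pxx = welch(rr_even - rr_even.mean(), fs=4.0,
                   nperseg=min(256, len(rr_even)))
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()       # low-frequency power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()       # high-frequency power
    return lf / hf

fs = 100.0
t = np.arange(0, 120, 1 / fs)
ppg = np.sin(2 * np.pi * 1.1 * t) + 0.05 * np.random.randn(t.size)  # ~66 bpm
print(svi_from_ppg(ppg, fs))
```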

  17. Note: automatic laser-to-optical-fiber coupling system based on monitoring of Raman scattering signal.

    PubMed

    Park, Kyoung-Duck; Kim, Yong Hwan; Park, Jin-Ho; Yim, Sang-Youp; Jeong, Mun Seok

    2012-09-01

    We developed an automatic laser-to-optical-fiber coupling (ALOC) system that is based on the difference in the Raman scattering signals of the core and cladding of the optical fiber. This system can be easily applied to all fields of fiber optics since it can perform automatic optical coupling within a few seconds regardless of the core size or the condition of the output end of the optical fiber. The coupling time for a commercial single-mode fiber for a wavelength of 632.8 nm (core diameter: 9 μm, cladding diameter: 125 μm) is ~1.5 s. The ALOC system was successfully applied to single-mode-fiber Raman endoscopy for the measurement of the Raman spectrum of carbon nanotubes.

  18. [Automatic segmentation of clustered breast cancer cells based on modified watershed algorithm and concavity points searching].

    PubMed

    Tong, Zhen; Pu, Lixin; Dong, Fangjie

    2013-08-01

    As a common malignant tumor, breast cancer seriously affects women's physical and psychological health and even threatens their lives, and its incidence is rising in some parts of the world. As a common assistive diagnostic technique in pathology, immunohistochemistry plays an important role in the diagnosis of breast cancer. Pathologists usually isolate positive cells from immunohistochemically stained specimens and calculate the ratio of positive cells, a core indicator in breast cancer diagnosis. In this paper, we present a new algorithm, based on a modified watershed algorithm and concavity-point searching, that identifies positive cells, segments clustered cells automatically, and then performs automatic counting. Compared with other methods in our experiments, our method accurately segments clustered cells without losing geometrical cell features and reports the exact number of separated cells.
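
    The watershed part of such a pipeline can be sketched with scikit-image's marker-controlled watershed on a distance transform; this is a generic variant for splitting touching cells, not the authors' modified algorithm with concavity-point searching.

```python
# Marker-controlled watershed for splitting touching cells: distance
# transform, local maxima as markers, watershed on the inverted distance.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_clustered(binary_mask, min_distance=7):
    distance = ndi.distance_transform_edt(binary_mask)
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=binary_mask.astype(int))
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_mask)
    return labels, labels.max()        # per-cell labels and cell count

# Toy mask: two overlapping discs standing in for clustered cells.
yy, xx = np.mgrid[:80, :80]
mask = ((yy - 40) ** 2 + (xx - 30) ** 2 < 15 ** 2) | \
       ((yy - 40) ** 2 + (xx - 52) ** 2 < 15 ** 2)
labels, n = split_clustered(mask)
print(n)   # expected: 2
```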

  19. [Method of automatic detection of brain lesion based on wavelet feature vector].

    PubMed

    Fan, Ya; Liu, Wei; Feng, Huanqing

    2011-06-01

    A new method for the automatic detection of brain lesions, based on wavelet feature vectors of CT images, is proposed in this paper. First, we created training samples by manually segmenting normal CT images into gray matter, white matter and cerebrospinal fluid sub-images. Then, we obtained the cluster centers using the FCM clustering algorithm. When detecting lesions, the CT image under examination is automatically segmented into sub-images, with a certain degree of over-segmentation allowed while preserving accuracy as much as possible. Features are then extracted from these sub-images, and their distances to the cluster centers determine whether they belong to one of the three normal tissue classes or, otherwise, to a lesion. The proposed method was verified by experiments.
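
    The FCM step can be sketched directly in NumPy; the implementation below follows the standard fuzzy c-means updates (fuzzifier m = 2), with toy one-dimensional intensity data standing in for the wavelet feature vectors.

```python
# Minimal fuzzy c-means (FCM) clustering in NumPy.
import numpy as np

def fcm(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """X: (n, d) feature vectors; c: number of clusters; m: fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                             # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :],
                              axis=2) + 1e-12
        # Standard membership update: u ~ d^(-2/(m-1)), normalized.
        U_new = 1.0 / dist ** (2 / (m - 1))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy data: three 1-D tissue intensity clusters.
X = np.concatenate([np.random.normal(mu, 0.05, 100)
                    for mu in (0.2, 0.5, 0.8)])[:, None]
centers, U = fcm(X, c=3)
print(np.sort(centers.ravel()))
```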

  20. Automatic, computer-based speech assessment on edentulous patients with and without complete dentures - preliminary results.

    PubMed

    Stelzle, F; Ugrinovic, B; Knipfer, C; Bocklet, T; Nöth, E; Schuster, M; Eitner, S; Seiss, M; Nkenke, E

    2010-03-01

    Dental rehabilitation of edentulous patients with complete dentures involves not only aesthetics and mastication of food, but also speech quality. The aim of this study was to introduce and validate a computer-based speech recognition system (ASR) for automatic speech assessment in edentulous patients after dental rehabilitation with complete dentures. To examine the impact of dentures on speech production, the speech outcome of edentulous patients with and without complete dentures was compared. Twenty-eight patients reading a standardized text were recorded twice - with and without their complete dentures in situ. A control group of 40 healthy subjects with natural dentition was recorded under the same conditions. Speech quality was evaluated by means of a polyphone-based ASR according to the percentage word accuracy (WA). Speech acceptability assessment by expert listeners and the automatic WA rating by the ASR showed a high correlation (corr = 0.71). Word accuracy was significantly reduced in edentulous speakers (55.42 ± 13.1) compared to the control group (69.79 ± 10.6). On the other hand, wearing complete dentures significantly increased the WA of the edentulous patients (60.00 ± 15.6). Speech production quality is significantly reduced after complete loss of teeth. Restoration of speech production quality is an important part of dental rehabilitation and can be improved for edentulous patients by means of complete dentures. The ASR has proven to be a useful and easily applicable tool for automatic speech assessment in a standardized way.
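
    Word accuracy is conventionally computed from a Levenshtein alignment of the recognized word sequence against the reference text, WA = (N - S - D - I) / N; a self-contained sketch follows (the polyphone-based ASR itself is of course not reproduced).

```python
# Word accuracy (WA) from the edit distance between reference and
# hypothesis word sequences: WA = 1 - WER.
def word_accuracy(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return 100.0 * (1 - d[n][m] / n)

print(word_accuracy("the patient reads a standardized text",
                    "the patient reads standardized text"))  # one deletion
```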

  1. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical dialog based Windows application, driven by menu, mouse and keyboard. On the one hand, it is a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming into a new object-oriented style, based on the organization of event interactions. The new features implemented in the algorithms of both versions are the following: both an analytical function and a graphical curve can be used as the peak model; the peak search algorithm recognizes not only Gaussian peaks but also peaks with an irregular form, both narrow peaks (2-4 channels) and broad ones (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications 28 (1982) 27-37. New version program summary - Program title: VACTIV; Catalogue identifier: ABAC_v2_0; Licensing provisions: none; Programming language: DELPHI 5-7 Pascal; Computer: IBM PC series; Operating system: Windows XX; RAM: 1 MB; Keywords: nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming; Classification: 17.6; Catalogue identifier of previous version: ABAC_v1_0; Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27; Does the new version supersede the previous version?: Yes. Nature of problem: VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  2. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of the hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  3. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico-mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is presented, together with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  4. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.

  5. Development of an automatic measuring device for total sugar content in chlortetracycline fermenter based on STM32

    NASA Astrophysics Data System (ADS)

    Liu, Ruochen; Chen, Xiangguang; Yao, Minpu; Huang, Suyi; Ma, Deshou; Zhou, Biao

    2017-01-01

    Because the fermented liquid in a chlortetracycline fermenter has high viscosity and complex composition, conventional instruments cannot directly measure its total sugar content. At present, total sugar content in a chlortetracycline fermenter is usually measured by offline manual sampling, which takes considerable time and manpower and introduces lag into the control process. To realize automatic measurement of total sugar content in a chlortetracycline fermenter, we developed an automatic measuring device based on an STM32 microcontroller. It not only realizes automatic sampling, filtering, and measurement of the fermented liquid and automatic washing of the device, but also displays the measurement results in the field and handles data communication. The experimental results show that the automatic measuring device for total sugar content in a chlortetracycline fermenter can meet the demands of practical application.

  6. Automatic Recognition of Solar Features for Developing Data Driven Prediction Models of Solar Activity and Space Weather

    DTIC Science & Technology

    2013-05-01

    AFRL-OSR-VA-TR-2013-0020: Automatic Recognition of Solar Features for Developing Data Driven Prediction Models of Solar Activity and Space Weather; contract number FA9550-09 (report-form metadata; abstract not recovered). Cited reference: Aschwanden, M. J. (2005), Physics of the Solar Corona: An Introduction with Problems and Solutions (2nd edition).

  7. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm that can be coupled with any flow and particle tracking model. The computational time is reduced thanks to the use of a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point with backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, under steady-state flow conditions, with single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, even in complex scenarios.
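
    A minimal sketch of the particle tracking ingredient, assuming the textbook analytic velocity field of a single pumping well in uniform background flow (for which the stagnation point is known in closed form); parameter values are illustrative and this is not the APA implementation itself.

```python
# Backward particle tracking sketch for a single pumping well in uniform
# background flow (textbook analytic field; not the APA implementation).
import numpy as np
from scipy.integrate import solve_ivp

U = 1e-5        # uniform seepage velocity in +x [m/s] (assumed)
Q = 1e-3        # pumping rate per unit thickness and porosity [m^2/s] (assumed)
well = np.array([0.0, 0.0])

def velocity(t, p):
    r2 = np.sum((p - well) ** 2)
    # superposition: uniform flow plus radial flow toward the well
    return np.array([U, 0.0]) - Q / (2 * np.pi * r2) * (p - well)

# The stagnation point lies downstream of the well where v = 0:
x_stag = Q / (2 * np.pi * U)

def backward(p0, t_end):
    # Track a particle backward in time by integrating the negated field.
    sol = solve_ivp(lambda t, p: -velocity(t, p), (0.0, t_end), p0,
                    max_step=t_end / 1000)
    return sol.y

path = backward(np.array([x_stag * 0.99, 1e-3]), t_end=5e6)
print("stagnation point at x =", x_stag, "m; path samples:", path.shape[1])
```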

  8. Automatic structure determination of regular polysaccharides based solely on NMR spectroscopy.

    PubMed

    Lundborg, Magnus; Fontana, Carolina; Widmalm, Göran

    2011-11-14

    The structural analysis of polysaccharides requires that the sugar components and their absolute configurations are determined. We show here that this can be performed based on NMR spectroscopy by utilizing butanolysis with (+)- and (-)-2-butanol, which gives the corresponding 2-butyl glycosides with characteristic ¹H and ¹³C NMR chemical shifts. The subsequent computer-assisted structural determination by CASPER can then be based solely on NMR data in a fully automatic way, as shown and implemented herein. The method is additionally advantageous in that reference data only have to be prepared once and, from a user's point of view, only the unknown sample has to be derivatized for use in CASPER.

  9. Research on large spatial coordinate automatic measuring system based on multilateral method

    NASA Astrophysics Data System (ADS)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, a manipulator-based automatic measurement system using the multilateral method was developed. The system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the laser tracers' measuring centers and the cat's eye reflector's measuring center, and an algorithm for automatically adjusting the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to automatically plan the orientations of the reflector corresponding to the 24 given points. After improving the orientations of a minority of points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The result shows that the proposed algorithm enables the developed system to measure spatial coordinates over a large range efficiently.
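
    A minimal sketch of the multilateral principle: with four stations of known position and a distance measurement from each, the reflector position follows from nonlinear least squares. The station layout and noise-free distances are assumptions for illustration; the real system must also calibrate the tracer positions themselves.

```python
# Multilateration sketch: recover a 3D point from distances to four laser
# tracer stations by nonlinear least squares (illustrative geometry).
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [5.0, 0.0, 0.0],
                     [0.0, 3.0, 0.0],
                     [0.0, 0.0, 3.2]])
p_true = np.array([2.0, 1.5, 1.0])
d_meas = np.linalg.norm(stations - p_true, axis=1)  # ideal measurements

def residuals(p):
    return np.linalg.norm(stations - p, axis=1) - d_meas

p_est = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0])).x
print("estimated point:", p_est)
```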

  10. An efficient automatic workload estimation method based on electrodermal activity using pattern classifier combinations.

    PubMed

    Ghaderyan, Peyvand; Abbasi, Ataollah

    2016-12-01

    Automatic workload estimation has received much attention because of its application in error prevention, and in the diagnosis and treatment of neural system impairment. Developing a simple but reliable method using a minimum number of psychophysiological signals is a challenge in automatic workload estimation. To address this challenge, this paper presents three different decomposition techniques (Fourier, cepstrum, and wavelet transforms) to analyze electrodermal activity (EDA). The efficiency of various statistical and entropic features was investigated and compared. To recognize different levels of an arithmetic task, the features were processed by principal component analysis and machine-learning techniques. These methods were incorporated into a workload estimation system based on two types of combination: feature-level and decision-level. The results indicated the reliability of the method for automatic and real-time inference of psychological states. The method provided a quantitative estimation of workload levels and a bias-free evaluation approach. The high average accuracy of 90% and the cost-effective requirements were the two important attributes of the proposed workload estimation system. The new entropic features proved to be more sensitive measures for quantifying time and frequency changes in EDA. The effectiveness of these measures was also compared with conventional tonic EDA measures, demonstrating the superiority of the proposed method in achieving accurate estimation of workload levels.
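
    A minimal sketch of the two combination schemes on synthetic EDA-like signals, using Fourier-, cepstrum- and wavelet-domain entropic features; the feature definitions, classifiers and toy signal generator are assumptions, not the paper's exact pipeline.

```python
# Sketch: entropic features from three decompositions of a toy EDA signal,
# combined at feature level (one classifier) and decision level (voting).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def entropy(p):
    p = p / (p.sum() + 1e-12)
    return -np.sum(p * np.log2(p + 1e-12))

def features(x):
    spec = np.abs(np.fft.rfft(x)) ** 2                                   # Fourier
    ceps = np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)))  # cepstrum
    wav = pywt.wavedec(x, "db4", level=4)                                # wavelet
    energies = np.array([np.sum(c ** 2) for c in wav])
    return np.array([entropy(spec), entropy(ceps[:32] ** 2), entropy(energies)])

rng = np.random.default_rng(0)
def make_signal(load):
    # toy EDA-like signal: slow drift plus load-dependent random fluctuations
    t = np.linspace(0, 60, 512)
    return 2 + 0.01 * t + load * rng.standard_normal(512).cumsum() / 50

X = np.array([features(make_signal(load)) for load in [0.2] * 40 + [1.0] * 40])
y = np.array([0] * 40 + [1] * 40)

# Feature-level combination: all domain features into one PCA + SVM pipeline.
clf = make_pipeline(PCA(n_components=2), SVC()).fit(X, y)

# Decision-level combination: one classifier per domain, then majority vote.
domain_clfs = [SVC().fit(X[:, [i]], y) for i in range(X.shape[1])]
votes = np.array([c.predict(X) for c in domain_clfs])
fused = (votes.sum(axis=0) >= 2).astype(int)
print("feature-level acc:", clf.score(X, y),
      "decision-level acc:", (fused == y).mean())
```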

  11. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that utilize manual sorting and mechanical automatic grading. To design an optimal algorithm, regression formulas and R² values were investigated by performing a regression analysis of each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between the actual volume and abalone weight was 0.999, showing a relatively high correlation. As a result, to easily estimate the actual volumes of abalones from computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula was derived to estimate the volumes of abalones through linear regression analysis between the calculated and actual volumes. The final automatic abalone grading algorithm is designed using the abalone volume estimation regression formula derived from the test results and the regression formula relating actual volume to abalone weight. For abalones weighing from 16.51 to 128.01 g, the results of a cross-validation evaluation of the algorithm's performance indicate root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively.
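
    A minimal sketch of the two-stage estimation, assuming the half-oblate-ellipsoid volume formula V = (2/3)·π·(L/2)·(W/2)·T and toy calibration numbers in place of the paper's measured abalone data.

```python
# Sketch of the half-oblate-ellipsoid volume assumption and the two-stage
# regression (calculated volume -> actual volume -> weight); toy numbers.
import numpy as np

def half_oblate_volume(length, width, thickness):
    # Half of an oblate ellipsoid: V = (1/2) * (4/3) * pi * a * b * c
    return (2.0 / 3.0) * np.pi * (length / 2) * (width / 2) * thickness

# Toy calibration data (length, width, thickness in cm; volume in cm^3; g)
L = np.array([6.0, 7.5, 9.0, 10.5]); W = np.array([4.0, 5.0, 6.0, 7.0])
T = np.array([1.5, 1.9, 2.3, 2.7])
actual_vol = np.array([19.0, 37.0, 65.0, 105.0])
weight = np.array([21.0, 41.0, 72.0, 116.0])

calc_vol = half_oblate_volume(L, W, T)
a1, b1 = np.polyfit(calc_vol, actual_vol, 1)   # calculated -> actual volume
a2, b2 = np.polyfit(actual_vol, weight, 1)     # actual volume -> weight

def grade(length, width, thickness):
    v = a1 * half_oblate_volume(length, width, thickness) + b1
    return a2 * v + b2   # estimated weight in grams

print("estimated weight:", round(grade(8.0, 5.5, 2.0), 1), "g")
```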

  12. An automatic recognition method of pointer instrument based on improved Hough transform

    NASA Astrophysics Data System (ADS)

    Xu, Li; Fang, Tian; Gao, Xiaoyu

    2015-10-01

    For the automatic recognition of pointer instruments, a method based on an improved Hough transform is proposed in this paper. Automatic recognition of pointer instruments must work under all kinds of lighting conditions, but the accuracy of binarization is degraded when the light is too strong or too dark. Therefore, an improved Otsu method is proposed to realize adaptive thresholding of pointer instrument images under varied lighting conditions. Based on the characteristics of the dial image, the Otsu method is used to obtain the maximum between-class variance and an initial threshold; the maximum between-class variance is then analyzed to determine whether the image is bright or dark. When the image is too bright or too dark, the smaller pixel values are discarded and the initial threshold is recalculated with the Otsu method repeatedly until the best binarized image is obtained. The pointer line of the binarized image is then transformed into Hough parameter space through the improved Hough transform, and the position of the pointer line is determined by searching for the maximum value in the accumulator array at each angle. Finally, according to the angle method, the pointer reading is obtained from the linear relationship between the initial scale and the angle of the pointer. Results show that the improved Otsu method makes it possible to obtain an accurate binarized image even when the light is too bright or too dark, which improves the adaptability of automatic pointer-instrument recognition under different lighting conditions. For pressure gauges with a range of 60 MPa, the relative recognition error reached 0.005 when using the improved Hough transform algorithm.
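
    A minimal sketch of the thresholding-plus-Hough pipeline using standard OpenCV calls; the iterative Otsu refinement described above is not reproduced, and the file name and the zero/full-scale dial angles are assumptions.

```python
# Sketch of pointer localization with Otsu thresholding and the standard
# Hough transform in OpenCV (adaptive Otsu refinement not reproduced).
import cv2
import numpy as np

img = cv2.imread("gauge.png", cv2.IMREAD_GRAYSCALE)  # hypothetical dial image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
lines = cv2.HoughLines(binary, 1, np.pi / 180, 120)

if lines is not None:
    # Take the strongest line as the pointer; map its angle to a reading via
    # the instrument's zero angle and span (assumed values, simplified mapping).
    rho, theta = lines[0][0]
    zero_angle, full_angle, full_scale = np.pi * 5 / 4, -np.pi / 4, 60.0  # MPa
    reading = (zero_angle - theta) / (zero_angle - full_angle) * full_scale
    print(f"pointer angle {theta:.3f} rad -> approx. {reading:.2f} MPa")
```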

  13. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    SciTech Connect

    Schoot, A. J. A. J. van de; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation of directional grey value gradients between the reference and the CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and the SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation

  14. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speech Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report approaches based on machine learning (ML) for the extraction of relevant samples from a big data space and apply them to ASR using soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  15. Automatic Texture Reconstruction of 3D City Model from Oblique Images

    NASA Astrophysics Data System (ADS)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D textures to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  16. Noise Robust Feature Scheme for Automatic Speech Recognition Based on Auditory Perceptual Mechanisms

    NASA Astrophysics Data System (ADS)

    Cai, Shang; Xiao, Yeming; Pan, Jielin; Zhao, Qingwei; Yan, Yonghong

    Mel Frequency Cepstral Coefficients (MFCC) are the most popular acoustic features used in automatic speech recognition (ASR), mainly because the coefficients capture the most useful information of the speech and fit well with the assumptions used in hidden Markov models. As is well known, MFCCs already employ several principles which have known counterparts in the peripheral properties of human hearing: decoupling across frequency, mel-warping of the frequency axis, log-compression of energy, etc. It is natural to introduce more mechanisms in the auditory periphery to improve the noise robustness of MFCC. In this paper, a k-nearest neighbors based frequency masking filter is proposed to reduce the audibility of spectra valleys which are sensitive to noise. Besides, Moore and Glasberg's critical band equivalent rectangular bandwidth (ERB) expression is utilized to determine the filter bandwidth. Furthermore, a new bandpass infinite impulse response (IIR) filter is proposed to imitate the temporal masking phenomenon of the human auditory system. These three auditory perceptual mechanisms are combined with the standard MFCC algorithm in order to investigate their effects on ASR performance, and a revised MFCC extraction scheme is presented. Recognition performances with the standard MFCC, RASTA perceptual linear prediction (RASTA-PLP) and the proposed feature extraction scheme are evaluated on a medium-vocabulary isolated-word recognition task and a more complex large vocabulary continuous speech recognition (LVCSR) task. Experimental results show that consistent robustness against background noise is achieved on these two tasks, and the proposed method outperforms both the standard MFCC and RASTA-PLP.
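
    For reference, a baseline MFCC extraction with the librosa package is sketched below; the paper's auditory modifications (k-nearest-neighbors frequency masking, ERB filter bandwidths, IIR temporal masking) would replace or augment stages of this standard pipeline and are not implemented here.

```python
# Baseline MFCC features with librosa (the paper's masking/ERB modifications
# are not part of this standard pipeline).
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))   # bundled example clip (fetched on first use)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)                             # (13, n_frames)
```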

  17. An image feature-based approach to automatically find images for application to clinical decision support.

    PubMed

    Stanley, R Joe; De, Soumya; Demner-Fushman, Dina; Antani, Sameer; Thoma, George R

    2011-07-01

    The illustrations in biomedical publications often provide useful information in aiding clinicians' decisions when full text searching is performed to find evidence in support of a clinical decision. In this research, image analysis and classification techniques are explored to automatically extract information for differentiating specific modalities to characterize illustrations in biomedical publications, which may assist in the evidence finding process. Global, histogram-based, and texture image illustration features were compared to basis function luminance histogram correlation features for modality-based discrimination over a set of 742 manually annotated images by modality (radiological, photo, etc.) selected from the 2004-2005 issues of the British Journal of Oral and Maxillofacial Surgery. Using a mean shifting supervised clustering technique, automatic modality-based discrimination results as high as 95.57% were obtained using the basis function features. These results compared favorably to other feature categories examined. The experimental results show that image-based features, particularly correlation-based features, can provide useful modality discrimination information.

  18. A transition-constrained discrete hidden Markov model for automatic sleep staging

    PubMed Central

    2012-01-01

    Background: Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method: The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectral analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross-validation was performed during this experiment. Results: Overall agreement between the expert and the presented results is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurately classified stage was SWS (94.9%), and the least accurately classified stage was S1 (<34%). In the majority of cases, S1 was classified as Wake (21%), S2 (33%) or REM sleep (12%), consistent with previous studies. However, the total time of S1 in the 20 all-night sleep recordings was less than 4%. Conclusion: The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies. PMID:22908930
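
    A minimal sketch of Viterbi decoding with a transition-constrained matrix, where forbidden stage transitions receive zero probability; the toy three-stage model and all probabilities are illustrative, not the trained six-stage DHMM.

```python
# Viterbi decoding with a transition-constrained matrix: forbidden stage
# transitions get zero probability (toy 3-stage example).
import numpy as np

states = ["Wake", "S2", "SWS"]
A = np.array([[0.8, 0.2, 0.0],     # Wake -> SWS directly is forbidden
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])    # SWS -> Wake directly is forbidden
pi = np.array([1.0, 0.0, 0.0])
B = np.array([[0.7, 0.2, 0.1],     # P(observation symbol | state)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

def viterbi(obs):
    logA, logB = np.log(A + 1e-300), np.log(B + 1e-300)
    delta = np.log(pi + 1e-300) + logB[:, obs[0]]
    psi = []
    for o in obs[1:]:
        trans = delta[:, None] + logA          # trans[i, j]: i -> j score
        psi.append(trans.argmax(axis=0))       # best predecessor of each j
        delta = trans.max(axis=0) + logB[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(psi):                   # backtrack
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 2, 2, 1]))
```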

  19. Automatic segmentation and statistical shape modeling of the paranasal sinuses to estimate natural variations

    NASA Astrophysics Data System (ADS)

    Sinha, Ayushi; Leonard, Simon; Reiter, Austin; Ishii, Masaru; Taylor, Russell H.; Hager, Gregory D.

    2016-03-01

    We present an automatic segmentation and statistical shape modeling system for the paranasal sinuses which allows us to locate structures in and around the sinuses, as well as to observe the variability in these structures. This system involves deformably registering a given patient image to a manually segmented template image, and using the resulting deformation field to transfer labels from the template to the patient image. We use 3D snake splines to correct errors in this initial segmentation. Once we have several accurately segmented images, we build statistical shape models to observe the population mean and variance for each structure. These shape models are useful to us in several ways. Regular registration methods are insufficient to accurately register pre-operative computed tomography (CT) images with intra-operative endoscopy video of the sinuses. This is because of deformations that occur in structures containing erectile tissue. Our aim is to estimate these deformations using our shape models in order to improve video-CT registration, as well as to distinguish normal variations in anatomy from abnormal variations, and automatically detect and stage pathology. We can also compare the mean shapes and variances in different populations, such as different genders or ethnicities, in order to observe differences and similarities, as well as in different age groups in order to observe the developmental changes that occur in the sinuses.
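
    A minimal sketch of the register-then-transfer-labels idea using SimpleITK, with Demons registration standing in for the paper's deformable registration and hypothetical file names; nearest-neighbour resampling preserves the integer label values.

```python
# Sketch of atlas label transfer: deformably register a template to a
# patient image (Demons as a stand-in method), then resample the template
# labels with the resulting transform.
import SimpleITK as sitk

patient = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)    # hypothetical paths
template = sitk.ReadImage("template_ct.nii.gz", sitk.sitkFloat32)
template_labels = sitk.ReadImage("template_labels.nii.gz")

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
disp_field = demons.Execute(patient, template)        # fixed, moving
transform = sitk.DisplacementFieldTransform(disp_field)

# Nearest-neighbour interpolation keeps label values intact.
patient_labels = sitk.Resample(template_labels, patient, transform,
                               sitk.sitkNearestNeighbor, 0,
                               template_labels.GetPixelID())
sitk.WriteImage(patient_labels, "patient_labels.nii.gz")
```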

  20. FPGA based system for automatic cDNA microarray image processing.

    PubMed

    Belean, Bogdan; Borda, Monica; Le Gal, Bertrand; Terebes, Romulus

    2012-07-01

    Automation is an open subject in DNA microarray image processing, aiming at reliable gene expression estimation. The paper presents a novel shock-filter-based approach for automatic microarray grid alignment. The proposed method offers significantly reduced computational complexity compared to state-of-the-art approaches, while achieving similar results in terms of accuracy. Based on this approach, we also propose an FPGA-based system for microarray image analysis that eliminates the shortcomings of existing software platforms: user intervention, increased computational time, and cost. Our system includes application-specific architectures which involve algorithm parallelization, aiming at fast and automated cDNA microarray image processing. The proposed automated image processing chain is implemented both on a general purpose processor and using the developed hardware architectures as co-processors in an FPGA-based system. The comparative results included in the last section show that an important gain in computational time is obtained using hardware-based implementations.

  1. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed based on the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of the city of Sherbrooke (Canada). An overall extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.

  2. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.

  3. Automatic method for building indoor boundary models from dense point clouds collected by laser scanners.

    PubMed

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-11-22

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoor environments after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoor environments in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners, yielding promising results. We have thoroughly evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.

  4. SVM-based classification selection algorithm for the automatic selection of guide star

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Xiong, Chengyi; Wu, Weiren; Tian, Jinwen; Liu, Jian

    2003-09-01

    A new general method for the automatic selection of guide stars, based on a new dynamic Visual Magnitude Threshold (VMT) hyper-plane and Support Vector Machines (SVM), is introduced. The high-dimensional nonlinear VMT plane can be easily obtained by using the SVM, and the guide star sets are then generated by the SVM classifier. The experimental results demonstrate that the catalog obtained by the proposed algorithm has several advantages, including fewer total stars, smaller catalog size and better distribution uniformity.
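
    A minimal sketch of learning a position-dependent magnitude threshold with an SVM on a synthetic catalog; the toy VMT labeling rule and the feature scaling are assumptions.

```python
# Sketch of an SVM-based guide-star selection boundary: learn a nonlinear
# magnitude threshold over position-dependent features (synthetic catalog).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 2000
ra, dec = rng.uniform(0, 360, n), rng.uniform(-90, 90, n)
mag = rng.uniform(2, 9, n)

# Toy labeling rule standing in for the dynamic VMT hyper-plane: the usable
# magnitude limit varies smoothly with sky position.
vmt = 6.0 + 0.8 * np.sin(np.radians(dec)) + 0.3 * np.cos(np.radians(ra))
y = (mag < vmt).astype(int)

X = np.column_stack([ra / 360.0, dec / 90.0, mag / 9.0])
clf = SVC(kernel="rbf", C=10.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```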

  5. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    PubMed Central

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido

    2015-01-01

    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders due to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the use of the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model overlapping the original image with quite good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed. PMID:26427894

  6. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    PubMed

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido; Calabrò, Rocco S

    2015-10-01

    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders due to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the use of the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model overlapping the original image with quite good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed.

  7. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametric requirements, while available data are scarce. There is therefore a need to search for new sources of information, such as satellite data. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied, but using this kind of data requires introducing the concept of spatio-temporal information. In this context, we have to deal with a lack of statistics and methodologies for incorporating spatio-temporal data in the calibration and validation processes. This research aims to be a contribution in that sense. The ecohydrological model was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques is proposed in order to calibrate the model taking into account the temporal variation and the spatial pattern of the observed NDVI and the simulated LAI. The results demonstrate that: (1) satellite data is an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated using only satellite data, is able to reproduce the vegetation dynamics (in time and in space) as well as the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, i.e., it takes into account both the temporal variation and the spatial pattern of the analyzed data.
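
    A minimal sketch of an SVD-based spatio-temporal comparison between an observed NDVI field and a simulated LAI field on toy data; the mode-agreement score is an illustrative objective, not the paper's calibration criterion.

```python
# Sketch of an SVD-based comparison between observed NDVI and simulated LAI
# fields: match leading spatial modes and temporal coefficients (toy data).
import numpy as np

T, P = 120, 400                      # months x pixels (toy dimensions)
rng = np.random.default_rng(0)
ndvi = rng.random((T, P))            # stand-in for MODIS NDVI anomalies
lai = ndvi + 0.1 * rng.standard_normal((T, P))  # stand-in simulated field

def leading_modes(field, k=3):
    anom = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k]  # temporal coefficients, spatial patterns

pc_obs, eof_obs = leading_modes(ndvi)
pc_sim, eof_sim = leading_modes(lai)

# Illustrative objective: agreement of both spatial patterns and temporal
# dynamics, summed over the first k modes.
score = sum(abs(np.corrcoef(eof_obs[i], eof_sim[i])[0, 1]) +
            abs(np.corrcoef(pc_obs[:, i], pc_sim[:, i])[0, 1]) for i in range(3))
print("mode-agreement score:", score)
```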

  8. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud

    NASA Astrophysics Data System (ADS)

    Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.

    2014-03-01

    Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.

  9. Biothermal Model of Patient and Automatic Control System of Brain Temperature for Brain Hypothermia Treatment

    NASA Astrophysics Data System (ADS)

    Wakamatsu, Hidetoshi; Gaohua, Lu

    Various surface-cooling apparatus, such as cooling caps, mufflers and blankets, have been commonly used to cool the brain to provide hypothermic neuro-protection for patients with hypoxic-ischemic encephalopathy. The present paper addresses brain temperature regulation from the viewpoint of automatic control, in order to help clinicians decide an optimal temperature for the cooling fluid supplied to these three types of apparatus. First, a biothermal model characterized by dynamic ambient temperatures is constructed for an adult patient, taking into account the clinical practice of hypothermia and anesthesia in brain hypothermia treatment. Second, the model is represented by a state equation as a lumped-parameter linear dynamic system. The biothermal model is validated by its various responses corresponding to clinical phenomena and treatment. Finally, an optimal regulator is tentatively designed to give clinicians some suggestions on the optimal temperature regulation of the patient's brain. It suggests the patient's brain temperature could be optimally controlled to follow the temperature trajectory prescribed by the clinicians. This study indicates a significant clinical possibility for automatic hypothermia treatment.
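
    A minimal sketch of designing the optimal regulator as an LQR problem via the continuous algebraic Riccati equation; the two-compartment matrices and weights are illustrative stand-ins, not the paper's identified biothermal model.

```python
# LQR sketch for a toy two-compartment (brain/body) thermal model; the
# matrices and weights are illustrative, not the identified biothermal model.
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [brain temp deviation, body temp deviation]; u = cooling-fluid temp deviation
A = np.array([[-0.02, 0.015],
              [0.01, -0.012]])
B = np.array([[0.005],
              [0.001]])
Q = np.diag([10.0, 1.0])   # penalize brain-temperature error most
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P      # optimal state feedback u = -Kx
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```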

  10. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  11. Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.

    PubMed

    Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L

    2011-01-01

    Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and the image-processing algorithms in order to automatically calculate flow velocity online. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
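
    A minimal sketch of the core velocimetry step: estimating the displacement of surface features between two frames from the peak of their cross-correlation, on synthetic frames with a known shift. Scaling to a physical velocity needs the camera's metres-per-pixel factor and the frame interval, both omitted here.

```python
# Cross-correlation displacement estimate between two frames (synthetic data).
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(2)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))   # known shift: 3 down, 5 right

a = frame1 - frame1.mean()
b = frame2 - frame2.mean()
corr = correlate2d(b, a, mode="same")
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
dy -= corr.shape[0] // 2
dx -= corr.shape[1] // 2
# velocity = displacement * (metres per pixel) / (frame interval)
print("estimated shift:", dy, dx)
```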

  12. Automatic classifier based on heart rate variability to identify fallers among hypertensive subjects

    PubMed Central

    Jovic, Alan; De Luca, Nicola; Pecchia, Leandro

    2015-01-01

    Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal of identifying fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with a feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80 and 72%, respectively), but low sensitivity (51%). These results suggest that ANS states causing falls can be reliably detected, but also that not all falls are due to ANS states. PMID:26609412

  13. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
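
    For reference, the reported agreement statistic can be computed with scikit-learn; the arrays below are hypothetical stand-ins for the OSCAR-assigned and expert-assigned SOC codes.

```python
# Cohen's kappa between two coders (toy 4-digit SOC-code arrays).
from sklearn.metrics import cohen_kappa_score

oscar_codes = ["2314", "5223", "9233", "2314", "3537"]
expert_codes = ["2314", "5223", "9223", "2315", "3537"]
print(cohen_kappa_score(oscar_codes, expert_codes))
```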

  14. Semi-Automatic Building Models and Façade Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and the popularity of smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for modelling, but there are some limitations: immediate updates when building models change are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing the ratios of model edge lengths to measured edge lengths; the results showed an average length-ratio error of 5.8%. Through this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  15. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in the ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising clinical abdominal CT scans from 20 patients (10 male and 10 female) and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  16. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  17. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  18. Towards an automatic statistical model for seasonal precipitation prediction and its application to Central and South Asian headwater catchments

    NASA Astrophysics Data System (ADS)

    Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence, statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site-specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model that is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns, and generates robust statistical relationships with subsequent precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, for the monsoonal precipitation amounts in the South Asian target areas a certain skill of the model could
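
    A minimal sketch of the screening-plus-random-forest step on synthetic data: cell-wise correlation of gridded SST anomalies with the target precipitation anomaly selects candidate predictors, which then feed a random forest. The clustering of cells into predictor regions is omitted, and the correlation cutoff is an assumption.

```python
# Correlation screening of gridded predictors followed by a random forest
# (synthetic data; region clustering omitted).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
years, cells = 40, 500
sst = rng.standard_normal((years, cells))               # gridded SST anomalies
precip = 0.8 * sst[:, 10] - 0.5 * sst[:, 42] + 0.3 * rng.standard_normal(years)

# Cell-wise correlation screening (|r| above a significance-style cutoff).
r = np.array([np.corrcoef(sst[:, j], precip)[0, 1] for j in range(cells)])
selected = np.flatnonzero(np.abs(r) > 0.4)

model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
model.fit(sst[:, selected], precip)
print("selected cells:", selected[:10], "... OOB R^2:", round(model.oob_score_, 2))
```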

  19. Automatic representation of urban terrain models for simulations on the example of VBS2

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Solbrig, Peter; Wernerus, Peter

    2014-10-01

    Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm and precise, especially under the conditions of asymmetric warfare, non-cooperative urban terrain and rapidly developing situations. Going through the mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and finally save lives. This work is dedicated to illustrating the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battle Space 2, a simulation system created by Bohemia Interactive System. The topographic object types that we are able to export into this simulation engine are either results of sensor data evaluation (buildings, trees, grass, and ground), which is performed fully automatically, or entities obtained from publicly available sources (streets and water areas), which can be converted into the system's format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow for the automatic extraction of doors and windows in laser point clouds captured from building walls and thus increase the level of detail of building models. As a consequence, it is important to simulate these animatable entities. In doing so, we are able to make some of the buildings in the simulation accessible.

  20. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach that uses available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source of training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often exists either due to less informative data or a large hypothesis space. Thus this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
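
    A minimal sketch of the centroid-distance mislabel screen for one declared crop class on toy features; the quantile cutoff implements the "soft constraint" only loosely and is an assumption, as is the synthetic data.

```python
# Centroid-distance mislabel screening for one declared class (toy features).
import numpy as np

def refine_class(X, keep_fraction=0.9):
    centroid = X.mean(axis=0)
    d = np.linalg.norm(X - centroid, axis=1)
    cutoff = np.quantile(d, keep_fraction)   # soft constraint, not a hard rule
    return X[d <= cutoff]

rng = np.random.default_rng(4)
wheat = rng.normal(0.0, 1.0, (95, 4))
mislabels = rng.normal(6.0, 1.0, (5, 4))     # e.g. wrongly coded parcels
declared = np.vstack([wheat, mislabels])
refined = refine_class(declared)
print("kept", len(refined), "of", len(declared), "samples")
```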

  1. [Automatic house detection from color aerial images based on image segmentation].

    PubMed

    He, Pei-Pei; Wan, You-Chuan; Jiang, Peng-Rui; Gao, Xian-Jun; Qin, Jia-Xin

    2014-07-01

    In order to achieve automatic housing detection from high-resolution aerial imagery, the present paper utilizes the color information and spectral characteristics of roofing materials, together with image segmentation theory, to study an automatic housing detection method. First, the method converts the RGB color space to the HSI color space, uses the characteristics of each HSI component and the spectral characteristics of the roofing materials to isolate red tiled roofs and gray cement roof areas, and obtains the initial housing segmentation using the marker-based watershed algorithm. Then, region growing is conducted in the hue component, seeded by segment samples, by calculating the average hue in the marked region. Finally, small spots are eliminated and a rectangle-fitting process is applied to obtain a clear outline of the housing areas. Compared with the traditional pixel-based region segmentation algorithm, the improved segment-growing method proposed in this paper operates in a one-dimensional color space, which reduces computation, requires no human intervention, and exploits the geometric information of neighboring pixels, so that both the speed and the accuracy of the algorithm are significantly improved. A case study applied the proposed method to high-resolution aerial images, and the experimental results demonstrate that the method has high precision and reasonable robustness.
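
    The first step above is the RGB-to-HSI conversion. The sketch below shows the standard geometric conversion in Python/NumPy; it is a generic illustration of that color-space step only, not the authors' implementation, and the epsilon guard against division by zero is an added assumption.

    ```python
    import numpy as np

    def rgb_to_hsi(img):
        """Convert an RGB image (floats in [0, 1]) to HSI channels."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        eps = 1e-8
        intensity = (r + g + b) / 3.0
        saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        hue = np.where(b <= g, theta, 2.0 * np.pi - theta)
        return np.stack([hue, saturation, intensity], axis=-1)
    ```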

  2. Automatic information timeliness assessment of diabetes web sites by evidence based medicine.

    PubMed

    Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba

    2014-11-01

    Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework which automatically assesses the timeliness of the content of health websites using evidence-based medicine. Our aim is to assess the accordance of website contents with the current literature and their information timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health websites which were archived between 2006 and 2013 by Archive-it, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of web content according to years and pre-determined time intervals, respectively. Information seekers and website owners may benefit from the proposed framework in finding relevant and up-to-date diabetes websites.

  3. High throughput and automatic colony formation assay based on impedance measurement technique.

    PubMed

    Lei, Kin Fong; Kao, Chich-Hao; Tsang, Ngan-Ming

    2017-03-02

    To predict the response of in vivo tumors, in vitro culture of cell colonies has been suggested as a standard assay with high clinical relevance. To describe the responses of cell colonies, the most widely used quantification method is to count the number and size of cell colonies under a microscope, which makes it difficult to perform the colony formation assay in a high-throughput, automated fashion. In this work, in situ analysis of cell colonies suspended in soft hydrogel was developed based on an impedance measurement technique. Cell colonies cultured between a pair of parallel plate electrodes were successfully analyzed by coating a layer of base hydrogel on one side of the electrode. Real-time and label-free monitoring of cell colonies was realized during the culture course. Impedance magnitude and phase angle represented, respectively, the summed colony response and the size of the colonies. In addition, the dynamic response of drug-treated colonies was demonstrated. A high-throughput and automatic colony formation assay was thus realized to facilitate more objective assessments in cancer research. Graphical Abstract: High-throughput and automatic colony formation assay realized by in situ impedimetric analysis across a pair of parallel plate electrodes in a culture chamber. Cell colonies suspended in soft hydrogel were cultured under the tested substance and their dynamic response was represented by impedance data.

  4. Automatic NMR-based identification of chemical reaction types in mixtures of co-occurring reactions.

    PubMed

    Latino, Diogo A R S; Aires-de-Sousa, João

    2014-01-01

    The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications ranges from process monitoring, metabolic profiling and authentication of products to quality control. An application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the (1)H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants) and the same was done for the products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of 6 types. From this dataset, we simulated mixtures in which two reactions of different types occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the (1)H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications was increased to 99% and 80% for the same test sets. The RF probability associated with the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data. Using no explicit structural information about the reaction participants, reaction elucidation is performed without structure elucidation of

  5. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

    Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects participating in the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from the image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as the outcome. Support vector machine classification was performed employing different feature sets: either only image features, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
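
    The classification-and-evaluation step described above maps naturally onto a standard SVM pipeline. Below is a minimal scikit-learn sketch, assuming hypothetical feature and outcome files; the file names, cross-validation setup and preprocessing are illustrative choices, not the paper's exact protocol.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    # X: per-subject features (e.g. calcification count/volume statistics,
    # age, smoking history); y: whether a CVE occurred within three years.
    X = np.load("features.npy")  # hypothetical feature matrix
    y = np.load("events.npy")    # hypothetical binary outcome

    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print("Az =", roc_auc_score(y, scores))
    ```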

  6. Automatic subthalamic nucleus detection from microelectrode recordings based on noise level and neuronal activity

    NASA Astrophysics Data System (ADS)

    Cagnan, Hayriye; Dolan, Kevin; He, Xuan; Fiorella Contarino, Maria; Schuurman, Richard; van den Munckhof, Pepijn; Wadman, Wytse J.; Bour, Lo; Martens, Hubert C. F.

    2011-08-01

    Microelectrode recording (MER) along surgical trajectories is commonly applied for refinement of the target location during deep brain stimulation (DBS) surgery. In this study, we utilize automatically detected MER features in order to locate the subthalamic nucleus (STN) employing an unsupervised algorithm. The automated algorithm makes use of the background noise level, compound firing rate and power spectral density along the trajectory and applies a threshold-based method to detect the dorsal and ventral borders of the STN. Depending on the combination of measures used for detection of the borders, the algorithm allocates confidence levels to the annotations made (i.e. high, medium and low). The algorithm has been applied to 258 trajectories obtained from 84 STN DBS implantations. The MERs used in this study have not been pre-selected or pre-processed and include all the viable measurements made. Out of 258 trajectories, 239 trajectories were annotated by the surgical team as containing the STN, versus 238 trajectories by the automated algorithm. The agreement level between the automatic annotations and the surgical annotations is 88%. Taking the surgical annotations as the gold standard, across all trajectories the algorithm made true positive annotations in 231 trajectories, true negative annotations in 12 trajectories, false positive annotations in 7 trajectories and false negative annotations in 8 trajectories. We conclude that our algorithm is accurate and reliable in automatically identifying the STN and locating the dorsal and ventral borders of the nucleus, and in the near future could be implemented for on-line intra-operative use.

  7. Wavelet-based automatic determination of the P- and S-wave arrivals

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.

    2013-12-01

    The detection of P- and S-wave arrivals is important for a variety of seismological applications, including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform, which allows examination of a signal in both the time and frequency domains. Unlike the Fourier transform, the basis functions are localized in time and frequency; therefore, wavelet decomposition is suitable for the analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of shear waves, and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
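
    A crude version of wavelet-based onset picking can be illustrated in a few lines: correlate the trace with a Ricker wavelet at several scales, sum the absolute coefficients into a characteristic function, and declare an onset where it first exceeds a noise-derived threshold. This is a hedged sketch; the threshold rule, scale range and pre-event noise assumption are my own simplifications, not the authors' picker or its uncertainty estimation.

    ```python
    import numpy as np

    def ricker(points, a):
        """Ricker (Mexican hat) wavelet used as the analyzing wavelet."""
        t = np.arange(points) - (points - 1) / 2.0
        return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

    def pick_onset(trace, scales=range(2, 32), k=5.0, noise_len=500):
        """Characteristic function from multi-scale wavelet energy,
        thresholded against the pre-event noise level."""
        cf = np.zeros(len(trace))
        for a in scales:  # correlate the trace with the wavelet at each scale
            w = ricker(min(10 * a, len(trace)), a)
            cf += np.abs(np.convolve(trace, w, mode="same"))
        noise = cf[:noise_len]  # assumes the trace starts before the event
        threshold = noise.mean() + k * noise.std()
        above = np.where(cf > threshold)[0]
        return above[0] if above.size else None  # sample index of the onset
    ```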

  8. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    PubMed

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms.

  9. Automatic Generation of 3D Caricatures Based on Artistic Deformation Styles.

    PubMed

    Clarke, Lyndsey; Chen, Min; Mora, Benjamin

    2011-06-01

    Caricatures are a form of humorous visual art, usually created by skilled artists for the intention of amusement and entertainment. In this paper, we present a novel approach for automatic generation of digital caricatures from facial photographs, which capture artistic deformation styles from hand-drawn caricatures. We introduced a pseudo stress-strain model to encode the parameters of an artistic deformation style using "virtual" physical and material properties. We have also developed a software system for performing the caricaturistic deformation in 3D which eliminates the undesirable artifacts in 2D caricaturization. We employed a Multilevel Free-Form Deformation (MFFD) technique to optimize a 3D head model reconstructed from an input facial photograph, and for controlling the caricaturistic deformation. Our results demonstrated the effectiveness and usability of the proposed approach, which allows ordinary users to apply the captured and stored deformation styles to a variety of facial photographs.

  10. Automatic atlas-based three-label cartilage segmentation from MR knee images.

    PubMed

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2014-10-01

    Osteoarthritis (OA) is the most common form of joint disease and is often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. Particular challenges are the thinness of cartilage, its relatively small volume in comparison to surrounding tissue, and the difficulty of locating cartilage interfaces, for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare with other cartilage segmentation approaches, we also validate on the 50 images of the SKI10 dataset.

  11. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
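
    The AD-versus-divided-differences contrast can be demonstrated on a toy function. The sketch below uses the JAX library to take an exact derivative of a smooth stand-in for a solver output with respect to a damping-like parameter, and compares it with a central divided difference; the function and parameter value are purely illustrative, not the flow solver or its coefficients.

    ```python
    import jax
    import jax.numpy as jnp

    # Toy stand-in for a flow-solver output: a smooth scalar function
    # of a damping coefficient.
    def lift_coefficient(damping):
        return jnp.tanh(3.0 * damping) / (1.0 + damping ** 2)

    d_exact = jax.grad(lift_coefficient)(0.25)   # exact derivative via AD

    h = 1e-3                                     # divided-difference estimate
    d_dd = (lift_coefficient(0.25 + h) - lift_coefficient(0.25 - h)) / (2 * h)
    print(d_exact, d_dd)  # AD carries no step-size truncation error; DD does
    ```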

  12. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    SciTech Connect

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow-dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits of storage technologies in providing regulation service. The paper also presents the slow-dynamics model of the power system integrated with storage technologies in a complete state-space form. The storage technologies have been integrated into the IEEE 24-bus system with a single area, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.

  13. Comprehensive physiological cardiovascular model enables automatic correction of hemodynamics in patients with acute life-threatening heart failure.

    PubMed

    Uemura, Kazunori; Kamiya, Atsunori; Shimizu, Shuji; Shishido, Toshiaki; Sugimachi, Masaru; Sunagawa, Kenji

    2006-01-01

    Saving the lives of patients with acute life-threatening heart failure is a major challenge: several fatal hemodynamic abnormalities must be corrected at the same time within a limited time frame. The formulation of such complicated treatments enables the development of a system that can automatically save the lives of patients with acute heart failure, an autopilot system. To accomplish this, we established a comprehensive physiological cardiovascular model, on which we based the design of the autopilot system. By translating hemodynamics into cardiovascular parameters (pumping ability, vascular resistance, blood volume), and by controlling each of these with individual drugs, we were able to correct blood pressure, cardiac output, and left atrial pressure to the target values rapidly (5.2 +/- 6.6, 6.8 +/- 4.6, and 11.7 +/- 9.8 minutes, respectively), stably, and simultaneously.

  14. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set from the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, which is subsequently applied to automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as injected sparsity that minimizes the data needed to represent the skeletal structural information associated with the set of targets under consideration.
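
    The core decomposition-and-truncation step can be sketched generically: diagonalize the data covariance, keep the leading eigenvectors, and project the data onto them. The NumPy sketch below shows only that step, under the assumption of a samples-by-features data matrix; the transform-domain filtering and composite-filter construction of the paper are not reproduced.

    ```python
    import numpy as np

    def truncated_eigen_features(data, n_keep):
        """Diagonalize the data covariance and keep the leading eigenvectors.
        `data` is (n_samples, n_features)."""
        centered = data - data.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
        order = np.argsort(eigvals)[::-1][:n_keep]  # strongest modes first
        basis = eigvecs[:, order]
        return centered @ basis                     # truncated representation
    ```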

  15. Developing a Satellite Based Automatic System for Crop Monitoring: Kenya's Great Rift Valley, A Case Study

    NASA Astrophysics Data System (ADS)

    Lucciani, Roberto; Laneve, Giovanni; Jahjah, Munzer; Mito, Collins

    2016-08-01

    The crop growth stage represents essential information for the management of agricultural areas. In this study we investigate the feasibility of a tool based on remotely sensed satellite (Landsat 8) imagery that is capable of automatically classifying crop fields, and we examine how far resolution enhancement based on pan-sharpening techniques and phenological information extraction, used to create decision rules that identify the semantic class to assign to an object, can effectively support the classification process. Moreover, we investigate the opportunity to extract vegetation health status information from remotely sensed assessment of the equivalent water thickness (EWT). Our case study is Kenya's Great Rift Valley; in this area a ground truth campaign was conducted during August 2015 in order to collect crop field GPS measurements, leaf area index (LAI) and chlorophyll samples.

  16. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
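
    The DLT calibration step mentioned above is a standard linear estimation. The sketch below shows the textbook DLT for a 3x3 homography from at least four pixel-to-metric point correspondences, solved with an SVD; it is a generic illustration of the calibration principle, not the authors' code.

    ```python
    import numpy as np

    def dlt_homography(pixel_pts, metric_pts):
        """Estimate the homography H mapping pixel coordinates to pool
        (metric) coordinates from >= 4 correspondences via the standard
        DLT linear system, solved as the SVD null space."""
        A = []
        for (x, y), (X, Y) in zip(pixel_pts, metric_pts):
            A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
            A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)   # right singular vector of smallest value
        return H / H[2, 2]
    ```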

  17. Low-complexity PDE-based approach for automatic microarray image processing.

    PubMed

    Belean, Bogdan; Terebes, Romulus; Bot, Adrian

    2015-02-01

    Microarray image processing is known as a valuable tool for gene expression estimation, a crucial step in understanding biological processes within living organisms. Automation and reliability are open subjects in microarray image processing, where grid alignment and spot segmentation are essential processes that can influence the quality of gene expression information. The paper proposes a novel partial differential equation (PDE)-based approach for fully automatic grid alignment in microarray images. Our approach can handle image distortions and performs grid alignment using the vertical and horizontal luminance function profiles. These profiles are evolved using a hyperbolic shock filter PDE and then refined using the autocorrelation function. The results are compared with those delivered by state-of-the-art approaches for grid alignment in terms of accuracy and computational complexity. Using the same PDE formalism and curve fitting, automatic spot segmentation is achieved and visual results are presented. For microarray images with different spot layouts, reliable results in terms of accuracy and reduced computational complexity are achieved, compared with existing software platforms and state-of-the-art methods for microarray image processing.

  18. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focuses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, because people move freely through areas that cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge into features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed cross-validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  19. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing (LSI) is a method, proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful for text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used to test the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.
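
    The low-rank factoring at the heart of LSI is conveniently expressed as a truncated SVD of the term-document matrix. The scikit-learn sketch below illustrates the idea on a tiny toy corpus; the corpus, the factor count, and the TF-IDF weighting are illustrative choices, and the resulting dense vectors are the kind of compact input a downstream neural network could consume.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    # Minimal LSI sketch: approximate the sparse term-document matrix
    # with a small number of latent factors.
    docs = ["neutron transport simulation",
            "text retrieval with latent factors",
            "semantic indexing of documents",
            "monte carlo neutron codes"]
    tdm = TfidfVectorizer().fit_transform(docs)  # documents x terms (sparse)
    lsi = TruncatedSVD(n_components=2)           # the low-rank factors
    doc_vectors = lsi.fit_transform(tdm)         # dense per-document vectors
    print(doc_vectors.shape)                     # (4, 2)
    ```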

  20. Automatic epileptic seizure detection in EEGs based on line length feature and artificial neural networks.

    PubMed

    Guo, Ling; Rivero, Daniel; Dorado, Julián; Rabuñal, Juan R; Pazos, Alejandro

    2010-08-15

    About 1% of the people in the world suffer from epilepsy, whose main characteristic is recurrent seizures. Careful analysis of electroencephalogram (EEG) recordings can provide valuable information for understanding the mechanisms behind epileptic disorders. Since epileptic seizures occur irregularly and unpredictably, automatic seizure detection in EEG recordings is highly desirable. The wavelet transform (WT) is an effective analysis tool for non-stationary signals such as EEGs. The line length feature reflects changes in waveform dimensionality and is sensitive to variation of the signal amplitude and frequency. This paper presents a novel method for automatic epileptic seizure detection which computes line length features on a wavelet-transform multiresolution decomposition and combines them with an artificial neural network (ANN) to classify EEG signals according to whether a seizure is present. To the knowledge of the authors, there exists no similar work in the literature. A well-known public dataset was used to evaluate the proposed method. The high accuracy obtained for three different classification problems testified to the effectiveness of the method.
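
    The line length feature itself is simply the sum of absolute sample-to-sample differences, computed here per wavelet subband. The sketch below assumes the PyWavelets library for the multiresolution decomposition; the wavelet family and decomposition level are illustrative choices, not necessarily the authors' settings.

    ```python
    import numpy as np
    import pywt

    def line_length_features(eeg_epoch, wavelet="db4", level=4):
        """Line length of each wavelet subband of an EEG epoch: the sum
        of absolute first differences, sensitive to amplitude and
        frequency variation. Returns one feature per subband."""
        coeffs = pywt.wavedec(eeg_epoch, wavelet, level=level)
        return np.array([np.sum(np.abs(np.diff(c))) for c in coeffs])
    ```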

  1. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework

    NASA Astrophysics Data System (ADS)

    Jiao, A. S. M.; Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Lee, C.-C.; Lam, Y. K.

    2014-07-01

    Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this can be realized by applying image segmentation algorithms to decompose the image into its constituent entities. However, classic image segmentation methods cannot be applied directly to a hologram, as each pixel in the hologram carries holistic, rather than local, information about the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, by segmenting the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is converted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram captured through the optical scanning holography (OSH) technique.

  2. A simulation-based approach towards automatic target recognition of high resolution space borne radar signatures

    NASA Astrophysics Data System (ADS)

    Anglberger, H.; Kempf, T.

    2016-10-01

    Specific imaging effects, caused mainly by the range measurement principle of a radar device, its much lower frequency range compared to the optical spectrum, the slanted imaging geometry and, not least, the limited spatial resolution, complicate the interpretation of radar signatures decisively. The coherent image formation in particular, which causes unwanted speckle noise, aggravates the problem of visually recognizing target objects. Fully automatic approaches with acceptable false alarm rates are therefore an even harder challenge. At the Microwaves and Radar Institute of the German Aerospace Center (DLR), the development of methods to implement a robust overall processing workflow for automatic target recognition (ATR) from high-resolution synthetic aperture radar (SAR) image data is in progress. The heart of the general approach is to use time series exploitation for the initial detection step and simulation-based signature matching for the subsequent recognition. This paper shows the overall ATR chain as a proof of concept for the special case of airplane recognition in image data from the space-borne SAR sensor TerraSAR-X.

  3. Automatic detection of optic disc based on PCA and mathematical morphology.

    PubMed

    Morales, Sandra; Naranjo, Valery; Angulo, Us; Alcaniz, Mariano

    2013-04-01

    The algorithm proposed in this paper automatically segments the optic disc from a fundus image. The goal is to facilitate the early detection of certain pathologies and to fully automate the process so as to avoid specialist intervention. The method proposed for the extraction of the optic disc contour is mainly based on mathematical morphology along with principal component analysis (PCA). It makes use of different operations such as the generalized distance function (GDF), a variant of the watershed transformation (the stochastic watershed), and geodesic transformations. The input of the segmentation method is obtained through PCA. The purpose of using PCA is to obtain the grey-scale image that best represents the original RGB image. The implemented algorithm has been validated on five public databases, obtaining promising results. The average values obtained (Jaccard and Dice coefficients of 0.8200 and 0.8932, respectively, an accuracy of 0.9947, and true positive and false positive fractions of 0.9275 and 0.0036) demonstrate that this method is a robust tool for the automatic segmentation of the optic disc. Moreover, it is fairly reliable, since it works properly on databases with a large degree of variability and improves on the results of other state-of-the-art methods.
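
    The PCA input-generation step admits a short sketch: treat the RGB pixels as 3-vectors, diagonalize their covariance, and project onto the first principal component to obtain the grey-scale image carrying the most variance. This NumPy sketch covers only that step, with the rescaling of the projection left out as a simplification.

    ```python
    import numpy as np

    def pca_grayscale(rgb):
        """Project the RGB pixels of a fundus image onto their first
        principal component, yielding a variance-maximizing grey image."""
        pixels = rgb.reshape(-1, 3).astype(float)
        centered = pixels - pixels.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        first_pc = eigvecs[:, np.argmax(eigvals)]  # dominant color direction
        gray = centered @ first_pc
        return gray.reshape(rgb.shape[:2])
    ```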

  4. A hybrid model for automatic identification of risk factors for heart disease.

    PubMed

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death both in the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with the complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data, with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task.

  5. Automatic laboratory-based strategy to improve the diagnosis of type 2 diabetes in primary care

    PubMed Central

    Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Maria; Lugo, Javier; Pomares, Francisco J; Asencio, Alberto; Ahumada, Miguel; Leiva-Salinas, Carlos

    2016-01-01

    Introduction: To study the pre-design and success of a strategy based on the addition of hemoglobin A1c (HbA1c) to the blood samples of certain primary care patients to detect new cases of type 2 diabetes. Materials and methods: In a first step, we retrospectively calculated the number of HbA1c tests that would have been measured in one year if HbA1c had been processed according to the guidelines of the American Diabetes Association (ADA). Based on those results, we decided to prospectively measure HbA1c in every primary care patient above 45 years, with no HbA1c in the previous 3 years and a glucose concentration between 5.6 and 6.9 mmol/L, during an 18-month period. We calculated the number of HbA1c tests automatically added by the LIS under our strategy, reviewed the medical records of those subjects to confirm whether type 2 diabetes was finally diagnosed, and calculated the cost of the intervention. Results: In the first stage, according to the guidelines, HbA1c should have been added to the blood samples of 13,085 patients, at a cost of 14,973€. In the prospective study, the laboratory added HbA1c for 2092 patients, leading to an expense of 2393€. 314 patients had an HbA1c value ≥ 6.5% (48 mmol/mol). 82 were finally diagnosed with type 2 diabetes: 28 thanks to our strategy, at an individual cost of 85.4€, and 54 due to the request of HbA1c by the general practitioners (GPs), at a cost of 47.5€. Conclusion: The automatic laboratory-based strategy detected patients with type 2 diabetes in primary care at a cost of 85.4€ per new case. PMID:26981026
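
    The laboratory rule driving the strategy is a simple reflex-testing condition. The Python sketch below encodes the three criteria stated in the abstract (age above 45, glucose between 5.6 and 6.9 mmol/L, no HbA1c in the previous 3 years); the function name, interface and the 3-year day count are hypothetical simplifications of what a real LIS rule engine would implement.

    ```python
    from datetime import date, timedelta

    def should_add_hba1c(age_years, glucose_mmol_l, last_hba1c_date, today=None):
        """Reflex rule sketched from the paper's inclusion criteria."""
        today = today or date.today()
        no_recent_result = (last_hba1c_date is None or
                            today - last_hba1c_date > timedelta(days=3 * 365))
        return (age_years > 45
                and 5.6 <= glucose_mmol_l <= 6.9
                and no_recent_result)
    ```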

  6. Automatic lumbar vertebrae detection based on feature fusion deep learning for partial occluded C-arm X-ray images.

    PubMed

    Li, Yang; Liang, Wei; Zhang, Yinlong; An, Haibo; Tan, Jindong

    2016-08-01

    Automatic and accurate lumbar vertebrae detection is an essential step in image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and the texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants, and in multi-angle views.
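
    The two feature types fed into the fusion model can be illustrated with OpenCV: Sobel gradients as a contour channel and a Gabor filter response as a texture channel. The kernel parameters below are illustrative guesses, not the authors' values, and stacking the two responses as input channels is a simplifying assumption about how the fusion is wired.

    ```python
    import cv2
    import numpy as np

    def contour_texture_features(xray_gray):
        """Sobel gradient magnitude (contour) and a Gabor response
        (texture) as two feature channels for a downstream CNN."""
        sobel_x = cv2.Sobel(xray_gray, cv2.CV_32F, 1, 0, ksize=3)
        sobel_y = cv2.Sobel(xray_gray, cv2.CV_32F, 0, 1, ksize=3)
        contour = cv2.magnitude(sobel_x, sobel_y)
        gabor = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,
                                   lambd=10.0, gamma=0.5)
        texture = cv2.filter2D(xray_gray.astype(np.float32), cv2.CV_32F, gabor)
        return np.stack([contour, texture], axis=-1)
    ```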

  7. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  8. Automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability, such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to humans: in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an unlimited number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.

  9. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions.

    PubMed

    Hamming, N M; Daly, M J; Irish, J C; Siewerdsen, J H

    2009-05-01

    improvement in precision was observed: specifically, the standard deviation in TRE was 0.2 mm for the automatic technique versus 0.34 mm for the manual technique (p = 0.001). The projection-based automatic registration technique demonstrates accuracy and reproducibility equivalent or superior to the conventional manual technique for both neurosurgical and head-and-neck marker configurations. Use of this method with C-arm CBCT eliminates the burden of manual registration on surgical workflow by providing automatic registration of surgical tracking in 3D images within approximately 20 s of acquisition, with the registration automatically updated with each CBCT scan. The automatic registration method is undergoing integration in ongoing clinical trials of intraoperative CBCT-guided head and neck surgery.

  10. Group-wise automatic mesh-based analysis of cortical thickness

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Cody Hazlett, Heather; Niethammer, Marc; Oguz, Ipek; Cates, Joshua; Whitaker, Ross; Piven, Joseph; Styner, Martin

    2011-03-01

    The analysis of neuroimaging data from pediatric populations presents several challenges. There are normal variations in brain shape from infancy to adulthood and normal developmental changes related to tissue maturation. Measurement of cortical thickness is one important way to analyze such developmental tissue changes. We developed a novel framework that allows group-wise automatic mesh-based analysis of cortical thickness. Our approach is divided into four main parts. First an individual pre-processing pipeline is applied on each subject to create genus-zero inflated white matter cortical surfaces with cortical thickness measurements. The second part performs an entropy-based group-wise shape correspondence on these meshes using a particle system, which establishes a trade-off between an even sampling of the cortical surfaces and the similarity of corresponding points across the population using sulcal depth information and spatial proximity. A novel automatic initial particle sampling is performed using a matched 98-lobe parcellation map prior to a particle-splitting phase. Third, corresponding re-sampled surfaces are computed with interpolated cortical thickness measurements, which are finally analyzed via a statistical vertex-wise analysis module. This framework consists of a pipeline of automated 3D Slicer compatible modules. It has been tested on a small pediatric dataset and incorporated in an open-source C++ based high-level module called GAMBIT. GAMBIT's setup allows efficient batch processing, grid computing and quality control. The current research focuses on the use of an average template for correspondence and surface re-sampling, as well as thorough validation of the framework and its application to clinical pediatric studies.

  11. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy interventions is presented, which uses particle swarm optimization (PSO) behavior and preoperative a priori knowledge. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic color images. The swarm behavior is directed by a new fitness function, optimized to improve detection and tracking performance. The function is defined as a linear combination of two terms, namely the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate, without the need for explicit initialization.
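
    A generic PSO loop suffices to illustrate how the swarm optimizes such a fitness. The sketch below is a standard minimizer over an arbitrary fitness callable; the paper's H and D terms, the image-based particle encoding, and all hyperparameter values are not reproduced and would need to be supplied.

    ```python
    import numpy as np

    def pso_minimize(fitness, dim, n_particles=30, iters=100,
                     w=0.7, c1=1.5, c2=1.5):
        """Textbook particle swarm optimization over a fitness function,
        e.g. a linear combination f = alpha*H + beta*D as described above."""
        rng = np.random.default_rng(0)
        pos = rng.uniform(-1, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)]  # the swarm's best shape
        return gbest
    ```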

  12. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
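
    Because the whole-beam dose is a weighted sum of pre-computed per-PSL doses, the commissioning fit reduces, in its simplest form, to a constrained least-squares problem. The SciPy sketch below fits non-negative PSL weights to measurements; the file names are hypothetical, and the paper's symmetry and smoothness regularizations and augmented Lagrangian solver are deliberately omitted from this simplification.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    psl_doses = np.load("psl_doses.npy")     # hypothetical: (n_points, n_psls)
    measured = np.load("measured_dose.npy")  # hypothetical: (n_points,)

    weights, residual = nnls(psl_doses, measured)  # non-negative PSL weights
    calculated = psl_doses @ weights               # commissioned beam dose
    ```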

  13. Automatic BSS-based filtering of metallic interference in MEG recordings: definition and validation using simulated signals

    NASA Astrophysics Data System (ADS)

    Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio

    2015-08-01

    Objective. One of the principal drawbacks of magnetoencephalography (MEG) is its high sensitivity to metallic artifacts, which come from implanted intracranial electrodes and ferromagnetic dental prostheses and produce a strong distortion that masks cerebral activity. The aim of this study was to develop an automatic algorithm based on blind source separation (BSS) techniques to remove metallic artifacts from MEG signals. Approach. Three methods were evaluated: AMUSE, a second-order technique, and INFOMAX and FastICA, both based on higher-order statistics. Simulated signals consisting of real artifact-free data mixed with real metallic artifacts were generated to objectively evaluate the effectiveness of BSS and the subsequent interference reduction. A completely automatic detection of metallic-related components was proposed, exploiting the known characteristics of the metallic interference: regularity and low-frequency content. Main results. The automatic procedure was applied to the simulated datasets and the three methods exhibited different performances. Results indicated that AMUSE preserved, and consequently recovered, more brain activity than INFOMAX and FastICA. The normalized mean squared error for the AMUSE decomposition remained below 2%, allowing an effective removal of artifactual components. Significance. To date, the performance of automatic artifact reduction had not been evaluated in MEG recordings. The proposed methodology is based on an automatic algorithm that provides effective interference removal. This approach can be applied as a processing step to any MEG dataset affected by metallic artifacts, allowing further analysis of otherwise unusable or poor-quality data.
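
    The BSS-then-reject workflow can be sketched with FastICA from scikit-learn: unmix the channels, flag components whose spectral power is concentrated at low frequencies (the regular, low-frequency signature of metallic interference), zero them, and remix. The thresholds and the spectral criterion below are illustrative stand-ins for the paper's detection rules, not their actual values.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_metallic_components(meg, fs, freq_cut=4.0, power_frac=0.6):
        """meg: (channels, samples). Unmix, drop low-frequency-dominated
        components, and reconstruct the cleaned recording."""
        ica = FastICA(n_components=meg.shape[0], random_state=0)
        sources = ica.fit_transform(meg.T)          # samples x components
        freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fs)
        spectra = np.abs(np.fft.rfft(sources, axis=0)) ** 2
        low_frac = spectra[freqs < freq_cut].sum(axis=0) / spectra.sum(axis=0)
        sources[:, low_frac > power_frac] = 0.0      # reject artifact components
        return ica.inverse_transform(sources).T
    ```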

  14. Automatic Concept-Based Query Expansion Using Term Relational Pathways Built from a Collection-Specific Association Thesaurus

    ERIC Educational Resources Information Center

    Lyall-Wilson, Jennifer Rae

    2013-01-01

    The dissertation research explores an approach to automatic concept-based query expansion to improve search engine performance. It uses a network-based approach for identifying the concept represented by the user's query and is founded on the idea that a collection-specific association thesaurus can be used to create a reasonable representation of…

  15. A Web-Based Assessment for Phonological Awareness, Rapid Automatized Naming (RAN) and Learning to Read Chinese

    ERIC Educational Resources Information Center

    Liao, Chen-Huei; Kuo, Bor-Chen

    2011-01-01

    The present study examined the equivalency of conventional and web-based tests in reading Chinese. Phonological awareness, rapid automatized naming (RAN), reading accuracy, and reading fluency tests were administered to 93 grade 6 children in Taiwan with both test versions (paper-pencil and web-based). The results suggest that conventional and…

  16. Automatic and continuous landslide monitoring: the Rotolon Web-based platform

    NASA Astrophysics Data System (ADS)

    Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro

    2013-04-01

    Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that, since 1985, has been threatening the nearby village of Recoaro Terme. The first written record of a landslide occurrence dates back to 1798. After the last reactivation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m³ detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation deployed over the landslide body are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available through a web browser with different access rights. The web environment provides the following advantages: 1) data are collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. Two monitoring systems are currently operating on this site: 1) a GB-InSAR radar interferometer (University of Florence - Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a web-based solution (CNR-IRPI Padova). This work deals with details on methodology, services and techniques adopted for the second

  17. Automatic diagnosis of malaria based on complete circle-ellipse fitting search algorithm.

    PubMed

    Sheikhhosseini, M; Rabbani, H; Zekri, M; Talebi, A

    2013-12-01

    Diagnosis of malaria parasitemia from blood smears is a subjective and time-consuming task for pathologists. An automatic diagnostic process would reduce the diagnostic time; it can also serve as a second opinion for pathologists and may be useful in malaria screening. This study presents an automatic method for malaria diagnosis from thin blood smears. Because the malaria life cycle starts with the formation of a ring around the parasite nucleus, the proposed approach is mainly based on curve fitting to detect the parasite ring in the blood smear. The method is composed of six main phases. The first is a stain-object extraction phase, which extracts candidate objects that may be infected by malaria parasites; it includes a stained-pixel extraction step based on intensity and colour, and stained-object segmentation by means of a stained-circle matching procedure. The second step is a preprocessing phase which makes use of nonlinear diffusion filtering. The process continues with detection of the parasite nucleus from the resulting image according to image intensity. The fourth step introduces a complete search process in which the circle search identifies the direction and initial points for a direct least-squares ellipse fitting algorithm. Furthermore, in the ellipse searching process, as the parasite shape is completed, undesired regions with high error values are removed and the ellipse parameters are modified. In the fifth step, features are extracted from the parasite candidate region instead of the whole candidate object; this feature extraction scheme, provided by the special searching process, removes the need for clump-splitting methods, and the stained-circle matching defined in the first step speeds up the whole procedure. Finally, a series of decision rules are applied to the extracted features to decide on the positivity or negativity of malaria parasite presence. The algorithm is applied on 26 digital images which are provided
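
    The direct least-squares ellipse fit referenced in the fourth step is the classic Fitzgibbon formulation: minimize the algebraic error of the conic subject to the ellipse constraint 4ac - b² = 1, which reduces to a generalized eigenproblem. The NumPy sketch below is a generic implementation of that fitting step, not the paper's full search process.

    ```python
    import numpy as np

    def fit_ellipse_direct(x, y):
        """Direct least-squares ellipse fit (Fitzgibbon-style): returns
        conic coefficients (a, b, c, d, e, f) for ax^2+bxy+cy^2+dx+ey+f=0."""
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        S = D.T @ D                      # scatter matrix
        C = np.zeros((6, 6))             # constraint matrix for 4ac - b^2 = 1
        C[0, 2] = C[2, 0] = 2.0
        C[1, 1] = -1.0
        eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S) @ C)
        # the ellipse solution is the eigenvector of the positive eigenvalue
        k = int(np.argmax(eigvals.real))
        return eigvecs[:, k].real
    ```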

  18. Automatic Kappa Angle Estimation for Air Photos Based on Phase Only Correlation

    NASA Astrophysics Data System (ADS)

    Xiong, Z.; Stanley, D.; Xin, Y.

    2016-06-01

    Approximate values of the exterior orientation parameters are needed for air photo bundle adjustment. Usually the airborne GPS/IMU can provide initial values for the camera position and attitude angles. However, in some cases the camera's attitude angles are not available due to the lack of an IMU or for other reasons. In this case, the kappa angle needs to be estimated for each photo before bundle adjustment. The kappa angle can be obtained from Ground Control Points (GCPs) in the photo. Unfortunately, enough GCPs are not always available. In order to overcome this problem, an algorithm is developed to automatically estimate the kappa angle for air photos based on the phase-only correlation technique. This function has been embedded in PCI software. Extensive experiments show that this algorithm is fast, reliable, and stable.
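
    Phase-only correlation itself is a short computation: normalize the cross-power spectrum of the two images to unit magnitude and inverse-transform it; the correlation peak gives the relative shift. The NumPy sketch below shows that core; recovering a rotation such as kappa additionally requires a polar (or log-polar) resampling before the correlation, which is omitted here as a simplification.

    ```python
    import numpy as np

    def phase_only_correlation(img_a, img_b):
        """POC surface of two equally sized grayscale images and the
        location of its peak (the translation between the images)."""
        Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross = Fa * np.conj(Fb)
        poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        return poc, peak
    ```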

  19. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectral peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. This paper proposes an automatic peak detection method for LIBS spectra that improves the ability to find overlapping peaks and the adaptivity of the search. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale and shift factors. The ridge peak detection is further improved with a ridge correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in its ability to distinguish overlapping peaks and in the precision of peak detection, and could be applied to data processing in LIBS.
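
    SciPy ships a ridge-based CWT peak detector of the same family as the method described (it uses a Ricker mother wavelet by default). A toy sketch on a synthetic spectrum with two overlapping lines, purely illustrative and not the authors' optimized variant:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic spectrum: two overlapping Gaussian lines plus noise
x = np.linspace(0, 100, 2000)
rng = np.random.default_rng(0)
spectrum = (np.exp(-((x - 40.0) / 0.6) ** 2)
            + 0.7 * np.exp(-((x - 41.8) / 0.6) ** 2)
            + 0.05 * rng.standard_normal(x.size))

# Ridge lines are tracked across this range of wavelet scale factors
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(2, 20))
print(x[peak_idx])   # positions of the detected peaks
```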

  20. Automatic localization of pupil using eccentricity and iris using gradient based method

    NASA Astrophysics Data System (ADS)

    Khan, Tariq M.; Aurangzeb Khan, M.; Malik, Shahzad A.; Khan, Shahid A.; Bashir, Tariq; Dar, Amir H.

    2011-02-01

    This paper presents a novel approach for the automatic localization of the pupil and iris. The pupil and iris are nearly circular regions surrounded by sclera, eyelids and eyelashes, and localizing both is extremely important in any iris recognition system. In the proposed algorithm the pupil is localized using an eccentricity-based bisection method that looks for the region with the highest probability of containing the pupil, while iris localization is carried out in two steps. In the first step, the iris image is directionally segmented and a noise-free region of interest is extracted. In the second step, angular lines in the region of interest are extracted and the edge points of the iris outer boundary are found from the gradient of these lines. The proposed method is tested on the CASIA ver 1.0 and MMU iris databases. Experimental results show that the method is comparatively accurate.

  1. Automatic Carbon Dioxide-Methane Gas Sensor Based on the Solubility of Gases in Water

    PubMed Central

    Cadena-Pereda, Raúl O.; Rivera-Muñoz, Eric M.; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J.; Anaya-Rivera, Ely K.

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0–100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible. PMID:23112626

  2. Automatic ICD-10 coding algorithm using an improved longest common subsequence based on semantic similarity

    PubMed Central

    Lu, HuiJuan; Li, LanJuan

    2017-01-01

    ICD-10 (International Classification of Diseases, 10th revision) is a classification of diseases, symptoms, procedures, and injuries. Diseases are often described in patients' medical records with free text, such as terms, phrases and paraphrases, which differ significantly from the names used in the ICD-10 classification. This paper presents an improved approach based on the Longest Common Subsequence (LCS) and semantic similarity for automatic Chinese diagnosis mapping, from the disease names given by clinicians to the disease names in ICD-10. The LCS is the longest string that is a subsequence of every member of a given set of strings. The improved LCS method proposed in this paper increases the accuracy of Chinese disease name mapping. PMID:28306739
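
    The classical LCS baseline that the paper improves on is a textbook dynamic program; a minimal sketch (the semantic-similarity weighting that constitutes the paper's actual contribution is not reproduced here):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic longest-common-subsequence length by dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(query: str, candidate: str) -> float:
    """Crude similarity for mapping a free-text diagnosis to an ICD-10 name."""
    return 2.0 * lcs_length(query, candidate) / (len(query) + len(candidate))

print(lcs_similarity("type 2 diabetes", "diabetes mellitus type 2"))
```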

  3. Automatic carbon dioxide-methane gas sensor based on the solubility of gases in water.

    PubMed

    Cadena-Pereda, Raúl O; Rivera-Muñoz, Eric M; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J; Anaya-Rivera, Ely K

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0-100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible.

  4. A semi-automatic web based tool for the selection of research projects reviewers.

    PubMed

    Pupella, Valeria; Monteverde, Maria Eugenia; Lombardo, Claudio; Belardelli, Filippo; Giacomini, Mauro

    2014-01-01

    The correct evaluation of research proposals remains problematic today, and in many cases grants and fellowships are subjected to this type of assessment. A web-based semi-automatic tool to help in the selection of reviewers was developed. The core of the proposed system is the matching of the MeSH descriptors of the publications submitted by the reviewers (for their accreditation) against the descriptors linked to the selected research keywords. Moreover, a citation-related index was calculated and used to discard unsuitable reviewers. This tool supported a website for the evaluation of candidates applying for a fellowship in the oncology field.

  5. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into the clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system which can localize all electrode positions at once, to an accuracy of approximately 1 ± 1 mm. A system for automatic identification of individual electrodes is implemented that overcomes the need for manual annotation. For this purpose, a system of markers is suggested which facilitates precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario.

  6. Knowledge-based automatic recognition technology of radome from infrared images

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-jian; Ma, Ling; Fang, Xiao; Chen, Lei; Lu, Hong-bin

    2009-07-01

    In this paper, a knowledge-based automatic target recognition (ATR) technique for radomes in infrared images is studied. The circular appearance of the radome is used as the characteristic that distinguishes it from the background. To address the low contrast of infrared images, a brightness transformation is first used to enhance the contrast of the original image. Since the background outline statistically exhibits vertical and horizontal directivity, a revised Sobel operator oriented at 45° and 135° is adopted to detect edge features so that background noise is effectively suppressed. To reduce the error rate of recognition from a single frame, the consistency of recognition results across successive frames is checked. The performance of the algorithm is tested on real infrared radome images; the correct recognition rate is around 90%, which indicates that the technique is feasible.
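
    The 45°/135° "revised Sobel" idea amounts to convolving with diagonal difference kernels. A minimal sketch assuming NumPy/SciPy; the exact kernel weights used by the authors are not given, so these are the standard diagonal Sobel variants:

```python
import numpy as np
from scipy.ndimage import convolve

# Diagonal Sobel kernels emphasizing 45-degree and 135-degree edges
K45 = np.array([[ 0.,  1., 2.],
                [-1.,  0., 1.],
                [-2., -1., 0.]])
K135 = np.array([[-2., -1., 0.],
                 [-1.,  0., 1.],
                 [ 0.,  1., 2.]])

def diagonal_edges(image):
    """Combined magnitude of the two diagonal edge responses."""
    img = image.astype(float)
    return np.hypot(convolve(img, K45), convolve(img, K135))

# Toy image with a bright disc: diagonal responses peak on its rim
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 100).astype(float)
print(diagonal_edges(disc).max())
```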

  7. An automatic sleep spindle detector based on wavelets and the teager energy operator.

    PubMed

    Ahmed, Beena; Redissi, Amira; Tafreshi, Reza

    2009-01-01

    Sleep spindles are one of the most important short-lasting rhythmic events occurring in the EEG during Non-Rapid Eye Movement sleep. Their accurate identification in a polysomnographic signal is essential for sleep professionals to help them mark Stage 2 sleep. Visual spindle scoring, however, is a tedious workload, as there are often a thousand spindles in an all-night recording. In this paper a novel approach for the automatic detection of sleep spindles based upon the Teager Energy Operator and wavelet packets is presented. The Teager operator was found to accurately enhance periodic activity in epochs of the EEG containing spindles. The wavelet packet transform proved effective in accurately locating spindles in the time-frequency domain. The autocorrelation function of the resultant Teager signal and the wavelet packet energy ratio were used to identify epochs with spindles. These two features were integrated into a spindle detection algorithm which achieved an accuracy of 93.7%.
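
    The discrete Teager Energy Operator used here has a simple closed form, psi[x](n) = x(n)^2 - x(n-1)*x(n+1); a minimal NumPy sketch with a synthetic spindle-like burst (illustrative values only):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1);
    it rises sharply during oscillatory bursts such as spindles."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]        # simple edge handling
    return psi

# A 13 Hz 'spindle' burst embedded in noise (fs = 100 Hz)
fs = 100
t = np.arange(3 * fs) / fs
eeg = 0.3 * np.random.default_rng(0).standard_normal(t.size)
eeg[fs:2 * fs] += np.sin(2 * np.pi * 13 * t[fs:2 * fs])
psi = teager_energy(eeg)
print(psi[fs:2 * fs].mean(), psi[:fs].mean())   # burst energy >> background
```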

  8. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    NASA Astrophysics Data System (ADS)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without travelling to the affected locations. However, this can be laborious if the polls are not properly automated. Since the answers given to these polls are subjective, and a number of them have already been classified for past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, with a classifier based on a supervised learning technique, namely a decision tree, and a group of polls based on the MMI and EMS-98 scales. The tree summarizes the most important questions of the poll and recursively divides the instance space, with each node splitting the space according to the possible answers. It was implemented with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, an implementation of the C4.5 decision tree algorithm. In this way, a preliminary model was obtained that is able to identify up to 4 different seismic intensities, with 73% of polls correctly classified. The error is rather high; therefore, we will update the on-line poll to improve the results, basing it on just one scale, for instance the MMI. In addition, the integration of an automatic seismic intensity methodology with a low error probability and a basic georeferencing system will allow the generation of preliminary isoseismal maps.
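
    Weka's J48 is a C4.5 decision tree; the same workflow can be sketched with scikit-learn's entropy-criterion CART as a close relative. Everything below, including the answer encoding, the class labels and the data, is hypothetical and only illustrates training a tree on coded poll answers:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoding: each poll is a vector of coded answers and the
# label is an assigned macroseismic intensity class (synthetic data here)
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 10))       # 10 multiple-choice questions
y = rng.integers(2, 6, size=200)             # 4 intensity classes (II-V)

# scikit-learn's entropy-based CART is a close relative of Weka's J48 (C4.5)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X, y)
print(clf.predict(X[:3]))
```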

  9. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  10. Two light attenuation models for automatic diameter measurement of the blood vessels.

    PubMed

    Michoud, E; Carpentier, P; Franco, A; Intaglietta, M

    1993-04-01

    The Lambert-Beer law for the absorption of light by blood in a vessel is used to model the light attenuation by a transilluminated blood vessel. Two models are used for automatic vessel diameter determination in intravital microscopy. Some requirements on the photometric system have to be met in order to reduce errors due to light scattering. Under these conditions, a videodensitometric pattern of the cross-section of the vessel can be fitted by the different models in order to obtain the diameter of the vessel. The first model uses a uniformly distributed red blood cell column; a non-linear estimation of the diameter is done with the Levenberg-Marquardt method in 2 s on a regular PC386 microcomputer. The second takes into account the presence of a plasma layer and computes the diameter of the red blood cell column and the diameter of the vessel in one minute. These models can be used for pharmacological studies or for a better understanding of the formation of a transilluminated intravital image. They can also be used for angiographic images.
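
    For the uniform red blood cell column, the Lambert-Beer optical density across a cylinder is proportional to the chord length at each lateral position, which can be fitted with Levenberg-Marquardt. A minimal sketch with synthetic data, assuming SciPy; the paper's exact parameterization may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def od_profile(x, x0, diameter, mu):
    """Optical density across a transilluminated cylindrical vessel:
    by the Lambert-Beer law, OD is proportional to the chord length
    of the red blood cell column at lateral position x."""
    r = diameter / 2.0
    chord = np.sqrt(np.clip(r ** 2 - (x - x0) ** 2, 0.0, None))
    return 2.0 * mu * chord

# Synthetic cross-sectional densitometric profile (illustrative values)
x_px = np.linspace(-30, 30, 121)
rng = np.random.default_rng(1)
od = od_profile(x_px, 0.0, 24.0, 0.05) + 0.002 * rng.standard_normal(x_px.size)

# With no bounds, curve_fit uses Levenberg-Marquardt, as in the paper
popt, _ = curve_fit(od_profile, x_px, od, p0=(1.0, 20.0, 0.04))
print("estimated diameter:", popt[1])
```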

  11. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    NASA Astrophysics Data System (ADS)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  12. Automatic Identification of Web-Based Risk Markers for Health Events

    PubMed Central

    Borsa, Diana; Hayward, Andrew C; McKendry, Rachel A; Cox, Ingemar J

    2015-01-01

    Background The escalating cost of global health care is driving the development of new technologies to identify early indicators of an individual’s risk of disease. Traditionally, epidemiologists have identified such risk factors using medical databases and lengthy clinical studies, but these are often limited in size, costly, and can fail to take full account of diseases carrying social stigma or to identify transient acute risk factors. Objective Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events associated with an individual user and automatically generate Web-based risk markers for some of the most common medical conditions worldwide, from cardiovascular disease to sexually transmitted infections and mental health conditions, as well as pregnancy. Methods Using anonymized datasets, we present methods to first distinguish individuals likely to have experienced specific health events and classify them into distinct categories. We then use the self-controlled case series method to find the incidence of health events in risk periods directly following a user’s search for a query category, and compare it to the incidence during other periods for the same individuals. Results Searches for pet stores were risk markers for allergy. We also identified some possible new risk markers; for example, searching for fast food and theme restaurants was associated with a transient increase in risk of myocardial infarction, suggesting this exposure goes beyond a long-term risk factor and may also act as an acute trigger of myocardial infarction. Dating and adult content websites were risk markers for sexually transmitted infections, such as human immunodeficiency virus (HIV). Conclusions Web-based methods provide a powerful, low-cost approach to automatically identify risk factors, and support more timely and personalized public health efforts to bring human and economic benefits. PMID

  13. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    NASA Astrophysics Data System (ADS)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have proven to be very effective. Unfortunately, the monitoring of the treatment procedure remains manual and hence time-consuming and prone to human error. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer and follow it over time to determine the effectiveness of the treatment process. In ophthalmology an ulcer area is detected for inspection via luminous excitation of a dye. In the imaging systems usually utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour, compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially a pre-processing stage that carries out local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly, we remove potential reflections from the affected areas by registering two candidate corneal images based on the detected corneal areas. Thirdly, the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary found via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the cornea being observed. Although this step makes the approach semi-automatic, it removes the impact of breakages of the corneal boundary due to occlusion, noise, or image quality degradation. The ratio of the ulcer area confined within the cornea to the corneal area is used as the measure for comparison. We demonstrate the use of the proposed tool in analysing the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of this ratio over time.
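
    As a minimal sketch of the colour-space step, the green (dye-excited) ulcer pixels can be isolated in HSV and expressed as a fraction of the corneal area. The threshold values and the precomputed corneal mask below are hypothetical placeholders, not the paper's calibrated settings (assumes OpenCV):

```python
import cv2
import numpy as np

def ulcer_ratio(bgr_image, corneal_mask):
    """Fraction of the corneal area stained green by the dye.
    corneal_mask: uint8 mask (0/255) of the detected corneal region."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hypothetical green range; real thresholds depend on the slit lamp setup
    green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
    ulcer = cv2.bitwise_and(green, green, mask=corneal_mask)
    # Both masks use 0/255 values, so the ratio of sums is the area ratio
    return ulcer.sum() / max(corneal_mask.sum(), 1)
```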

  14. Automatic detecting method of LED signal lamps on fascia based on color image

    NASA Astrophysics Data System (ADS)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile. Automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed that checks three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Hundreds of fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system, and the detection results were demonstrated to be stable and accurate.

  15. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. The ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using a Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules that split the image into three regions (shadow + vegetation, bare soil + roads, and buildings); and (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results shows that buildings detected in dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  16. Volume-based Feature Analysis of Mucosa for Automatic Initial Polyp Detection in Virtual Colonoscopy

    PubMed Central

    Wang, Su; Zhu, Hongbin; Lu, Hongbing; Liang, Zhengrong

    2009-01-01

    In this paper, we present a volume-based polyp candidate determination scheme operating on the mucosa for automatic polyp detection in computed tomographic colonography. Unlike most existing computer-aided detection (CAD) methods, where the mucosa layer is a single-layer surface, a thick mucosa of 3-5 voxels fully reflecting the partial volume effect is intentionally extracted, which precludes the direct application of traditional geometric features. To address this, fast marching-based adaptive gradient/curvature and weighted integral curvature along normal directions (WICND) are developed for the volume-based mucosa. Polyp candidates are then optimally determined by computing and clustering these fast marching-based adaptive geometric features. By testing on 52 patient datasets, in which 26 patients had polyps of size 4-22 mm, both the locations and number of polyp candidates detected by WICND and the previously developed linear integral curvature (LIC) were compared. The results were promising: WICND outperformed LIC in two main aspects: (1) the number of detected false positives was reduced from 706 to 132 on average, which significantly lightens the machine learning burden in the feature space, and (2) both the sensitivity and accuracy of polyp detection were slightly improved, especially for polyps smaller than 5 mm. PMID:19774204

  17. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  18. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  19. Fully automatic segmentation of the mitral leaflets in 3D transesophageal echocardiographic images using multi-atlas joint label fusion and deformable medial modeling.

    PubMed

    Pouch, A M; Wang, H; Takabe, M; Jackson, B M; Gorman, J H; Gorman, R C; Yushkevich, P A; Sehgal, C M

    2014-01-01

    Comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease. Real-time 3D transesophageal echocardiography (3D TEE) is a practical, highly informative imaging modality for examining the mitral valve in a clinical setting. To facilitate visual and quantitative 3D TEE image analysis, we describe a fully automated method for segmenting the mitral leaflets in 3D TEE image data. The algorithm integrates complementary probabilistic segmentation and shape modeling techniques (multi-atlas joint label fusion and deformable modeling with continuous medial representation) to automatically generate 3D geometric models of the mitral leaflets from 3D TEE image data. These models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically, as structures with locally varying thickness. In this work, expert image analysis is the gold standard for evaluating automatic segmentation. Without any user interaction, we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3D TEE data acquired from a mixed population of subjects with normal valve morphology and mitral valve disease.

  20. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies.
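
    The Dice coefficient used throughout is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal NumPy sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True    # 4 voxels
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True    # 6 voxels, 4 shared
print(dice(a, b))                                 # 2*4 / (4+6) = 0.8
```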

  1. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies.

  2. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, the fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the occurrence of the remaining failures would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal node.
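
    The cut-set semantics the translation must preserve can be illustrated on a toy AND/OR tree. A minimal sketch, assuming Python and a hypothetical nested-tuple tree encoding (this is not the DG TO FT code):

```python
from itertools import product

def cut_sets(node):
    """Cut sets of a fault tree encoded as nested tuples:
    ('basic', name) | ('and', [children]) | ('or', [children])."""
    kind = node[0]
    if kind == 'basic':
        return [frozenset([node[1]])]
    child_sets = [cut_sets(c) for c in node[1]]
    if kind == 'or':                 # any one child's cut set suffices
        return [s for sets in child_sets for s in sets]
    # 'and': one cut set from every child, unioned together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Drop any cut set that is a proper superset of another."""
    return [s for s in sets if not any(t < s for t in sets)]

tree = ('and', [('or', [('basic', 'pump_fail'), ('basic', 'valve_stuck')]),
                ('basic', 'power_loss')])
print(minimal(cut_sets(tree)))
# -> [{'pump_fail', 'power_loss'}, {'valve_stuck', 'power_loss'}]
```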

  4. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.

  5. Automatic analyzer for highly polar carboxylic acids based on fluorescence derivatization-liquid chromatography.

    PubMed

    Todoroki, Kenichiro; Nakano, Tatsuki; Ishii, Yasuhiro; Goto, Kanoko; Tomita, Ryoko; Fujioka, Toshihiro; Min, Jun Zhe; Inoue, Koichi; Toyo'oka, Toshimasa

    2015-03-01

    A sensitive, versatile, and reproducible automatic analyzer for highly polar carboxylic acids based on a fluorescence derivatization-liquid chromatography (LC) method was developed. In this method, carboxylic acids were automatically and fluorescently derivatized with 4-(N,N-dimethylaminosulfonyl)-7-piperazino-2,1,3-benzoxadiazole (DBD-PZ) in the presence of 4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methylmorpholinium chloride (DMT-MM) using a pretreatment program installed in an LC autosampler. All of the DBD-PZ-carboxylic acid derivatives were separated on an ODS column within 30 min by gradient elution. The peak of DBD-PZ did not interfere with the separation and quantification of any of the acids with the exception of lactic acid. From LC-MS/MS analysis, we confirmed that lactic acid was converted to an oxytriazinyl derivative, which was further modified with a dimethoxytriazine group of DMT-MM; we detected this oxytriazinyl derivative to quantify lactic acid. The detection limits (signal-to-noise ratio = 3) for the examined acids ranged from 0.19 to 1.1 µM, which corresponds to 95-550 fmol per injection. The intra- and inter-day precisions for typical highly polar carboxylic acids were all <9.0%. The developed method was successfully applied to the comprehensive analysis of carboxylic acids in various samples, including fruit juices, red wine and media from cultured tumor cells.

  6. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar (SAR), represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results of the proposed algorithm and a human analyst was estimated at 85% and 93% for ENVISAT and RADARSAT, respectively.

  7. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    NASA Astrophysics Data System (ADS)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e., a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is perturbed through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  8. Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums

    PubMed Central

    Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-01-01

    Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and determined, on the basis of the best regression models, the probability of each request belonging to each of the 38 categories, with a cutoff of 50%. Recall and precision on a test sample were calculated as measures of the quality of the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other
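
    Cramer's V for a word-category contingency table is sqrt(chi2 / (n * (k - 1))), with k the smaller table dimension; a minimal sketch assuming SciPy, with made-up indicator data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(word_present, in_category):
    """Cramer's V = sqrt(chi2 / (n * (k - 1))) for a 2x2 word/category table."""
    table = np.zeros((2, 2))
    for w, c in zip(word_present, in_category):
        table[int(w), int(c)] += 1
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Toy indicators: does a word occur, and is the request in the category
word = [1, 1, 0, 0, 1, 0, 1, 0]
cat  = [1, 1, 0, 0, 1, 0, 0, 1]
print(cramers_v(word, cat))
```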

  9. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, in which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, more insightful research towards exploiting sparsely sampled, piecewise-dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  10. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    NASA Technical Reports Server (NTRS)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  11. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of oblique ionograms based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing techniques, and echo characteristics to determine best-fit values for seven parameters and initial values for the three QP model parameters, whose search spaces form the input data required by the HGA. The HGA then searches for the three parameters' best-fit values within their search spaces based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
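
    One common form of the quasi-parabolic layer referenced above expresses electron density as N(r) = Nm [1 - ((r - rm)/ym)^2 (rb/r)^2] with layer base rb = rm - ym; a minimal sketch with illustrative (not fitted) parameter values:

```python
import numpy as np

def qp_density(r, nm, rm, ym):
    """Quasi-parabolic electron density profile (one common form):
    N(r) = Nm * (1 - ((r - rm)/ym)**2 * (rb/r)**2), rb = rm - ym,
    clipped to zero outside the layer."""
    rb = rm - ym
    n = nm * (1.0 - ((r - rm) / ym) ** 2 * (rb / r) ** 2)
    return np.where(n > 0.0, n, 0.0)

# Illustrative values: peak density 1e12 m^-3 at 300 km altitude,
# layer semi-thickness 100 km, Earth radius 6371 km
r = np.linspace(6371e3 + 150e3, 6371e3 + 450e3, 7)
print(qp_density(r, 1e12, 6371e3 + 300e3, 100e3))
```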

  12. Ontology-based automatic identification of public health-related Turkish tweets.

    PubMed

    Küçük, Emine Ela; Yapar, Kürşad; Küçük, Dilek; Küçük, Doğan

    2017-04-01

    Social media analysis, such as the analysis of tweets, is a promising research topic for tracking public health concerns, including epidemics. In this paper, we present an ontology-based approach to automatically identify public health-related Turkish tweets. The system is based on a public health ontology that we constructed through a semi-automated procedure. The ontology concepts are expanded through a linguistically motivated relaxation scheme as the last stage of ontology development, before being integrated into our system to increase its coverage. The resulting lexical resource, which includes the terms corresponding to the ontology concepts, is used to filter the Twitter stream so that a plausible tweet subset, consisting mostly of public health-related tweets, can be obtained. Experiments were carried out on two million genuine tweets and promising precision rates were obtained. A Web-based interface for tracking the results of the identification system, intended for the relevant public health staff, was also implemented in the course of the study. Hence, the current social media analysis study makes both technical and practical contributions to the important domain of public health.

  13. Smooth pursuit detection in binocular eye-tracking data with automatic video-based performance evaluation.

    PubMed

    Larsson, Linnéa; Nyström, Marcus; Ardö, Håkan; Åström, Kalle; Stridh, Martin

    2016-12-01

    An increasing number of researchers record binocular eye-tracking signals from participants viewing moving stimuli, but the majority of event-detection algorithms are monocular and do not consider smooth pursuit movements. The purposes of the present study are to develop an algorithm that discriminates between fixations and smooth pursuit movements in binocular eye-tracking signals and to evaluate its performance using an automated video-based strategy. The proposed algorithm uses a clustering approach that takes both spatial and temporal aspects of the binocular eye-tracking signal into account, and is evaluated using a novel video-based evaluation strategy based on automatically detected moving objects in the video stimuli. The binocular algorithm detects 98% of fixations in image stimuli compared to 95% when only one eye is used, while for video stimuli, both the binocular and monocular algorithms detect around 40% of smooth pursuit movements. The present article shows that using binocular information for discrimination of fixations and smooth pursuit movements is advantageous in static stimuli, without impairing the algorithm's ability to detect smooth pursuit movements in video and moving-dot stimuli. With an automated evaluation strategy, time-consuming manual annotations are avoided and a larger amount of data can be used in the evaluation process.

  14. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes “quality of service” (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user’s request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894
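
    The core of preference-weighted QoS ranking can be sketched in a few lines. The records, attribute names, and weights below are hypothetical, and the UPWSR paper's scoring differs in detail; this only illustrates the ordering step:

```python
# Hypothetical QoS records and user-preference weights
services = [
    {"name": "svcA", "response_ms": 120, "availability": 0.99, "cost": 3.0},
    {"name": "svcB", "response_ms": 80,  "availability": 0.95, "cost": 5.0},
    {"name": "svcC", "response_ms": 95,  "availability": 0.99, "cost": 4.0},
]
# Negative weights penalize attributes the user wants low (in practice the
# attributes would be normalized to a common scale first)
weights = {"response_ms": -0.5, "availability": 10.0, "cost": -0.2}

def score(svc):
    return sum(w * svc[attr] for attr, w in weights.items())

ranked = sorted(services, key=score, reverse=True)
print([s["name"] for s in ranked])   # best-fit candidates first
```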

  15. An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.

    PubMed

    Johansson, A Torbjorn; White, Paul R

    2011-08-01

    This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances.
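
    The order-statistics background estimate mentioned above can be sketched as a per-bin median over time of the spectrogram, used for pre-whitening. A minimal version assuming SciPy; the paper's actual estimator may use other order statistics:

```python
import numpy as np
from scipy.signal import spectrogram

def whiten_spectrogram(x, fs):
    """Divide each frequency bin by its median over time: an order-statistics
    noise-floor estimate that flattens stationary background noise."""
    f, t, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
    noise_floor = np.median(S, axis=1, keepdims=True)
    return f, t, S / np.maximum(noise_floor, 1e-12)

# An upsweep 'whistle' in noise (illustrative values, fs = 48 kHz)
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (6000 * t + 2000 * t ** 2)) \
    + 0.5 * np.random.default_rng(2).standard_normal(t.size)
f, tt, W = whiten_spectrogram(x, fs)
print(W.shape, W.max())
```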

  16. An automatic falling drop system based on multicommutation process for photometric chlorine determination in bleach.

    PubMed

    da Silva Borges, Sivanildo; Reis, Boaventura F

    2007-09-26

    In this work, an automatic photometric procedure for the determination of chlorine in bleach samples, employing N,N'-diethyl-p-phenylenediamine (DPD) as the chromogenic reagent, is described. The procedure was based on a falling drop system in which the analyte (Cl(2)) was collected by a DPD solution drop (50 microL) after its release from the previously acidified sample bulk. The flow system was designed around the multicommutation process, assembling a set of three-way solenoid valves which, under microcomputer control, made it straightforward to handle the sample and reagent solutions in order to control analyte delivery and drop generation. Analyte volatilization was improved by coupling a small heating device online. The detection system comprised a green LED (515 nm) and a phototransistor. To prove the usefulness of the proposed procedure, a set of bleach samples was analyzed. Comparing the results with those obtained with the reference method, no significant difference at the 95% confidence level was observed. Other profitable features were also achieved: a linear response ranging from 15 up to 100 mgL(-1) Cl(2) (R=0.999); a detection limit of 4.5 mgL(-1) Cl(2) estimated using the 3 sigma criterion; a relative standard deviation of 2.5% (n=10) for a typical bleach sample containing 25.0 mgL(-1) Cl(2); a consumption of 55 microg of DPD per determination; and an analytical frequency of 20 determinations per hour.

  18. Automatic Single Tree Detection in Plantations using UAV-based Photogrammetric Point clouds

    NASA Astrophysics Data System (ADS)

    Kattenborn, T.; Sperlich, M.; Bataua, K.; Koch, B.

    2014-08-01

    For reasons of documentation, management and certification there is high interest in efficient inventories of palm plantations at the single-plant level. Recent developments in unmanned aerial vehicle (UAV) technology facilitate spatially and temporally flexible acquisition of high resolution 3D data. Common single tree detection approaches are based on Very High Resolution (VHR) satellite or Airborne Laser Scanning (ALS) data. However, VHR data is often limited by clouds and commonly does not allow for height measurements, and both VHR and in particular ALS data entail relatively high acquisition costs. Sperlich et al. (2013) already demonstrated the high potential of UAV-based photogrammetric point clouds for single tree detection using pouring algorithms. This approach was adjusted and improved for application to palm plantations. The 9.4 ha test site on Tarawa, Kiribati, comprised densely scattered growing palms, as well as abundant undergrowth and trees. Using a standard consumer grade camera mounted on an octocopter, two flight campaigns at 70 m and 100 m altitude were performed to evaluate the effect of Ground Sampling Distance (GSD) and image overlap. To avoid commission errors and improve the terrain interpolation, the point clouds were classified based on the geometric characteristics of the classes, i.e. (1) palm, (2) other vegetation, and (3) ground. The mapping accuracy amounts to 86.1% for the entire study area and 98.2% for dense growing palm stands. We conclude that this flexible and automatic approach has high capabilities for operational use.
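
    As a stand-in for the pouring algorithm (which is not detailed in the abstract), a common alternative for single-tree detection is local-maxima search on a canopy height model derived from the photogrammetric point cloud; the window size and height threshold below are hypothetical.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_palms(chm, min_height=4.0, crown_radius_px=5):
        """Detect candidate palm tops in a canopy height model (CHM) raster.
        A pixel is a treetop if it is the local maximum within a crown-sized
        window and exceeds a minimum height above ground."""
        local_max = ndimage.maximum_filter(chm, size=2 * crown_radius_px + 1)
        tops = (chm == local_max) & (chm > min_height)
        labeled, n = ndimage.label(tops)
        # One representative coordinate per detected crown.
        return ndimage.center_of_mass(tops, labeled, range(1, n + 1))

    chm = np.random.rand(100, 100) * 2
    chm[30, 40] = chm[70, 20] = 12.0    # two synthetic palms
    print(detect_palms(chm))            # ~[(30.0, 40.0), (70.0, 20.0)]
    ```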

  19. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Liu, Yuan; Sowers, Thomas S.; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
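
    The triggering rule can be stated in a few lines; this is only the decision logic implied by the abstract, with a hypothetical threshold, while the paper's substance lies in the model-predictive estimate of altitude loss.

    ```python
    def should_trigger_go_around(altitude_ft, est_altitude_loss_ft,
                                 min_altitude_ft=50.0):
        """Trigger the automatic recovery maneuver when the altitude projected
        to remain after a go-around falls below the minimum threshold."""
        return altitude_ft - est_altitude_loss_ft < min_altitude_ft

    # On an unstable approach, the model's loss estimate grows until it trips:
    print(should_trigger_go_around(300.0, est_altitude_loss_ft=280.0))  # True
    ```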

  20. First clinical experience with the automatic positioning system and Leksell gamma knife Model C. Technical note.

    PubMed

    Horstmann, G A; Schöpgens, H; van Eck, A T; Kreiner, H J; Herz, W

    2000-12-01

    In May of 1999, the first Leksell Model C gamma knife was installed at the Gamma Knife Zentrum in Krefeld, Germany. The authors recount their experience with this latest technical gamma knife development. Until the end of 1999, extensive physical and technical tests were performed and the system's hardware and software were continuously improved and adapted to the user's needs. By the end of 1999, 163 gamma knife surgeries (GKSs) had been performed using the new functionality of the Model C in manual or "trunnion" mode, in which the trunnions, the two parts of the system that fix the patient's headframe to the gamma knife, are checked manually at each isocenter position. During the same period the new automatic positioning system (APS) was extensively tested and refined so that the first APS treatment could be performed in January 2000. Fifty GKSs have been performed with the APS capability of the Model C. It was possible to use APS alone in 74% of surgeries, whereas in 14% some shots were given with APS and some with trunnions. In 12%, GKS was scheduled and planned for APS but, due to unexpected technical (6%) or mechanical (6%) reasons, the treatment had to be performed manually. At present there are some spatial restrictions with the Model C in APS mode compared with the Model B. The most significant restriction is the narrow space for the patient's shoulders, especially when deep-seated lesions are treated. Through mechanical changes to the APS motor housing and some modifications to the motor-driven couch adjustment, these limitations will be reduced in the future. APS treatment runs smoothly and fast, and in no case did any relevant safety error occur during GKS. The more stringent mechanical limitations of the APS compared with the Model B mean that frame placement on the head is more critical than before.

  1. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The methodology comprises two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of the parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is then carried out based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet customized requirements in hospitals and institutions.
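
    A minimal sketch of stage determination by conditional probability, assuming the expert knowledge database stores per-stage parameter densities; the stage names, Gaussian densities, and data structures are illustrative assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np

    STAGES = ["wake", "REM", "stage1", "stage2", "SWS"]

    def gaussian(x, mean, std):
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

    def classify_epoch(params, model, priors):
        """params: name -> value for one 30-s epoch.
        model[stage][name] = (mean, std) learned from clinician-scored data;
        priors[stage] = P(stage).  Returns the stage maximizing the posterior
        P(stage) * prod_i p(x_i | stage) (log domain for stability)."""
        best, best_score = None, -np.inf
        for stage in STAGES:
            log_post = np.log(priors[stage])
            for name, x in params.items():
                mean, std = model[stage][name]
                log_post += np.log(gaussian(x, mean, std) + 1e-12)
            if log_post > best_score:
                best, best_score = stage, log_post
        return best
    ```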

  2. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis, such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictive parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM) features, (3) gray-level run-length matrix (GLRLM) features, and (4) Tamura texture features. To rank the discriminative power of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to this ranking. The selected optimal features were incorporated into a back-propagation neural network to establish the predictive parameter model. A wide variety of MR images covering various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that the new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
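
    The pipeline (image features in, filter parameters out) can be sketched as below; for brevity only a few basic statistics stand in for the 83 attributes, the training targets are synthetic, and the network size is an assumption.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def basic_stats(img):
        """A few of the simpler image attributes (the paper's category 1);
        GLCM, GLRLM and Tamura features would be appended in the full system."""
        return [img.mean(), img.std(), np.median(img),
                ((img - img.mean()) ** 3).mean() / (img.std() ** 3 + 1e-9)]

    # X: feature vectors of training images; y: the bilateral-filter
    # parameters (e.g. sigma_spatial, sigma_range) hand-tuned by an expert.
    rng = np.random.default_rng(0)
    imgs = [rng.normal(100, s, (64, 64)) for s in rng.uniform(5, 30, 50)]
    X = np.array([basic_stats(im) for im in imgs])
    y = np.array([[3.0, im.std() * 2] for im in imgs])   # toy targets

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict(X[:1]))   # predicted filter parameters for a new image
    ```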

  3. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, GGO are computationally difficult to segment and analyze compared with other lung disease patterns, since they usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory, and we systematically evaluate its performance in segmenting GGO in lung CT images under different conditions. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of the MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation together with quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
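
    A compact sketch of MAP estimation on an Ising-like MRF by simulated annealing with a Gibbs sampler, the optimization machinery the paper uses; the class means, noise level, smoothness weight and cooling schedule here are hypothetical, and the paper's adaptive (AMAP) model and knowledge-guided post-processing are omitted.

    ```python
    import numpy as np

    def mrf_segment(img, mu=(0.2, 0.7), sigma=0.15, beta=1.5,
                    T0=4.0, cooling=0.95, sweeps=30, rng=None):
        """Binary MAP segmentation: Gaussian data term with class means mu,
        Potts smoothness term of weight beta, annealed Gibbs sampling."""
        if rng is None:
            rng = np.random.default_rng(0)
        labels = (img > np.mean(mu)).astype(int)   # initial guess
        H, W = img.shape
        T = T0
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    energies = []
                    for k in (0, 1):
                        data = (img[i, j] - mu[k]) ** 2 / (2 * sigma ** 2)
                        nbrs = [labels[a, b] for a, b in
                                ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                                if 0 <= a < H and 0 <= b < W]
                        smooth = beta * sum(k != n for n in nbrs)
                        energies.append(data + smooth)
                    # Gibbs update: P(label=1) from the energy difference.
                    p1 = 1.0 / (1.0 + np.exp((energies[1] - energies[0]) / T))
                    labels[i, j] = int(rng.random() < p1)
            T *= cooling   # annealing schedule
        return labels
    ```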

  4. UFC advisor: An AI-based system for the automatic test environment

    NASA Technical Reports Server (NTRS)

    Lincoln, David T.; Fink, Pamela K.

    1990-01-01

    The Air Logistics Command within the Air Force is responsible for maintaining a wide variety of aircraft fleets and weapon systems. To maintain these fleets and systems requires specialized test equipment that provides data concerning the behavior of a particular device. The test equipment is used to 'poke and prod' the device to determine its functionality. The data represent voltages, pressures, torques, temperatures, etc. and are called testpoints. These testpoints can be defined numerically as being in or out of limits/tolerance. Some test equipment is termed 'automatic' because it is computer-controlled. Due to the fact that effective maintenance in the test arena requires a significant amount of expertise, it is an ideal area for the application of knowledge-based system technology. Such a system would take testpoint data, identify values out-of-limits, and determine potential underlying problems based on what is out-of-limits and how far. This paper discusses the application of this technology to a device called the Unified Fuel Control (UFC) which is maintained in this manner.

  5. Adaptive automatic data analysis in full-field fringe-pattern-based optical metrology

    NASA Astrophysics Data System (ADS)

    Trusiak, Maciej; Patorski, Krzysztof; Sluzewski, Lukasz; Pokorski, Krzysztof; Sunderland, Zofia

    2016-12-01

    Fringe pattern processing and analysis is an important task in full-field optical measurement techniques like interferometry, digital holography, structured illumination and moiré. In this contribution we present several adaptive automatic data analysis solutions based on the notion of the Hilbert-Huang transform for measurand retrieval via fringe pattern phase and amplitude demodulation. The Hilbert-Huang transform consists of a 2D empirical mode decomposition algorithm and Hilbert spiral transform analysis. Empirical mode decomposition adaptively dissects a meaningful number of same-scale subimages from the analyzed pattern; it is a data-driven method. Appropriately managing this set of unique subimages results in a very powerful fringe pre-filtering tool. Phase/amplitude demodulation is performed using the Hilbert spiral transform aided by a local fringe orientation estimator. We describe several optical measurement techniques for the characterization of technical and biological objects based on specially tailored modifications of the Hilbert-Huang algorithm for fringe pattern denoising, detrending and amplitude/phase demodulation.

  6. Automatic Mapping of Glacier Based on SAR Imagery by Benefits of Freely Optical and Thermal Data

    NASA Astrophysics Data System (ADS)

    Fang, L.; Hoegner, L.; Stilla, U.

    2015-03-01

    For many research applications, such as water resources evaluation, determination of glacier-specific changes, and calculation of the past and future contribution of glaciers to sea-level change, information about the size and spatial distribution of glaciers is crucial. In this paper, an automatic method for determining glacier surface area from single-track high resolution TerraSAR-X imagery, aided by freely available low resolution optical and thermal data, is presented. Based on the normalized difference snow index (NDSI) and the land surface temperature (LST) map generated from the optical and thermal data, combined with surface slope data, a low resolution binary mask was derived and used for the supervised classification of glaciers in the SAR imagery. A set of suitable features is then derived from the SAR intensity image, such as texture information generated from the gray level co-occurrence matrix (GLCM) and the intensity values. With these features, the glacier surface is discriminated from the background by the Random Forests (RF) method.
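
    The optical/thermal prior mask reduces to a few thresholded index computations; the NDSI formula is standard, but the threshold values below are illustrative rather than those used in the paper.

    ```python
    import numpy as np

    def glacier_prior_mask(green, swir, lst_celsius, slope_deg,
                           ndsi_min=0.4, lst_max=5.0, slope_max=30.0):
        """Low-resolution glacier prior from optical and thermal data:
        NDSI = (green - SWIR) / (green + SWIR), combined with land surface
        temperature and surface slope constraints."""
        ndsi = (green - swir) / (green + swir + 1e-9)
        return (ndsi > ndsi_min) & (lst_celsius < lst_max) & (slope_deg < slope_max)
    ```

    The resulting binary mask then serves as training labels for the supervised SAR classification step.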

  7. Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer

    NASA Astrophysics Data System (ADS)

    Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune

    2014-03-01

    Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the contrast agent 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region of interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling make it possible to remove the bladder and to identify the tumour and metastatic lymph nodes. The proposed method was applied to 125 patients and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has very high potential for substituting the tedious manual delineation of PET-positive areas.

  8. An Automatic Labeling of K-means Clusters based on Chi-Square Value

    NASA Astrophysics Data System (ADS)

    Kusumaningrum, R.; Farikhin

    2017-01-01

    Automatic labeling methods in text clustering are widely implemented. However, there are few studies on automatic cluster labeling for numeric data points. The aim of this study is therefore to develop a novel automatic cluster labeling method for numeric data points that utilizes the Chi-Square test to derive cluster labels. We performed K-means clustering as the clustering method, with the disparity of Health Human Resources as a case study. The results show that the accuracy of cluster labeling is about 89.14%.
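
    A minimal sketch of chi-square-based labeling, under the assumption (not stated in the abstract) that the label is the feature whose within-cluster distribution deviates most from the rest of the data; the binning scheme is also an assumption.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    def label_cluster(cluster_pts, other_pts, feature_names, bins=5):
        """Label a K-means cluster with the feature showing the largest
        chi-square statistic on a 2 x bins contingency table of binned
        values (inside vs. outside the cluster)."""
        best_name, best_chi2 = None, -1.0
        for f, name in enumerate(feature_names):
            edges = np.histogram_bin_edges(
                np.concatenate([cluster_pts[:, f], other_pts[:, f]]), bins=bins)
            in_c, _ = np.histogram(cluster_pts[:, f], bins=edges)
            out_c, _ = np.histogram(other_pts[:, f], bins=edges)
            table = np.vstack([in_c, out_c]) + 1   # +1 avoids zero cells
            chi2 = chi2_contingency(table)[0]
            if chi2 > best_chi2:
                best_name, best_chi2 = name, chi2
        return best_name, best_chi2
    ```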

  9. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    SciTech Connect

    Etmektzoglou, A; Mishra, P; Svatos, M

    2015-06-15

    Purpose: To automate the creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment, plus additional functions programmed into the tool, insulates users from the underlying schema tedium and allows easy calculation, parameterization, graphical visualization, validation and, finally, automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or a combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment also allows for parameterization of trajectories, thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline, all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended-SAD "virtual isocenter" trajectories of various shapes, such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource for experimenting with families of complex geometric trajectories for a TrueBeam linac. It makes Developer Mode more accessible as a vehicle to quickly
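
    To illustrate trajectory parameterization as the abstract describes it (axis positions as analytic functions of cumulative MU), here is a small generator; the element and attribute names are invented for illustration and are NOT the real Developer Mode XML schema.

    ```python
    import math

    def circle_trajectory(n_points=12, gantry_amp_deg=60.0, couch_amp_cm=5.0,
                          mu_total=120.0):
        """Sample a parameterized circular trajectory: gantry and couch
        positions are analytic functions of normalized progress, emitted as
        a generic XML-like control-point list."""
        lines = ["<Trajectory>"]
        for k in range(n_points + 1):
            s = k / n_points                       # normalized progress 0..1
            mu = mu_total * s
            gantry = gantry_amp_deg * math.sin(2 * math.pi * s)
            couch = couch_amp_cm * math.cos(2 * math.pi * s)
            lines.append(f'  <Cp MU="{mu:.1f}" Gantry="{gantry:.2f}" '
                         f'CouchLat="{couch:.2f}"/>')
        lines.append("</Trajectory>")
        return "\n".join(lines)

    print(circle_trajectory(n_points=4))
    ```

    Changing the two amplitude variables yields an entire family of trajectories, which is the point of the spreadsheet parameterization.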

  10. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    SciTech Connect

    Ciller, Carlos; De Zanet, Sandro I.; Rüegsegger, Michael B.; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L.; Kowal, Jens H.; and others

    2015-07-15

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
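
    The Dice similarity coefficient used in the validation is worth stating concretely; a minimal implementation for binary masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient DSC = 2|A n B| / (|A| + |B|)
        between two binary segmentation masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    m1 = np.zeros((10, 10), bool); m1[2:8, 2:8] = True
    m2 = np.zeros((10, 10), bool); m2[3:9, 3:9] = True
    print(dice(m1, m2))   # overlap score in [0, 1]
    ```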

  11. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  12. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research

    PubMed Central

    Prescott, Jeff W.; Gurcan, Metin N.

    2011-01-01

    In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease, which affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post processing. The block discovery is achieved by classifying the image content to bone and background blocks according to their similarity to the categories in the training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm requires constructing a graph using image pixel data followed by applying a maximum-flow algorithm which generates a minimum graph-cut that corresponds to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish between bone and highly similar adjacent structures, such as fat tissues with high accuracy. The performance of the proposed system is evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images with background having intensity and spatial characteristics similar to those of bone are used to assess the robustness and consistency of the developed algorithm. The results show an automatic bone detection rate of
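
    The graph-cut step can be illustrated on a tiny 1-D analogue, assuming the networkx library; the real system builds its graph from pixel data and the block-discovery output, with data and smoothness terms learned rather than the toy capacities used here.

    ```python
    import networkx as nx

    # Each pixel is a node connected to source (bone) and sink (background)
    # by data terms, and to its neighbours by a smoothness term; the minimum
    # s-t cut yields the segmentation.
    pixels = [0.9, 0.8, 0.7, 0.2, 0.1]         # toy intensities (bone bright)
    G = nx.DiGraph()
    for i, v in enumerate(pixels):
        G.add_edge("src", i, capacity=v)        # cost of labelling background
        G.add_edge(i, "sink", capacity=1 - v)   # cost of labelling bone
        if i:
            G.add_edge(i - 1, i, capacity=0.3)  # smoothness between neighbours
            G.add_edge(i, i - 1, capacity=0.3)

    cut_value, (bone, background) = nx.minimum_cut(G, "src", "sink")
    print(sorted(n for n in bone if n != "src"))   # bone pixels: [0, 1, 2]
    ```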

  13. Spatial correlation based artifact detection for automatic seizure detection in EEG.

    PubMed

    Skupch, Ana M; Dollfuß, Peter; Fürbaß, Franz; Gritsch, Gerhard; Hartmann, Manfred M; Perko, Hannes; Pataraia, Ekaterina; Lindinger, Gerald; Kluge, Tilmann

    2013-01-01

    Automatic EEG-processing systems such as seizure detection systems are increasingly used to cope with the large amount of data that arises from long-term EEG monitoring. Since artifacts occur very often during recording and disturb EEG processing, good automatic artifact detection is crucial for these systems. We present a novel, computationally inexpensive automatic artifact detection system that uses the spatial distribution of the EEG signal and the locations of the electrodes to detect artifacts on individual electrodes. The algorithm was evaluated by including it in the automatic seizure detection system EpiScan and applying it to a very large amount of data comprising a wide variety of EEGs and artifacts.

  14. Development of a microcontroller-based automatic control system for the electrohydraulic total artificial heart.

    PubMed

    Kim, H C; Khanwilkar, P S; Bearnson, G B; Olsen, D B

    1997-01-01

    An automatic physiological control system for the actively filled, alternately pumped ventricles of the volumetrically coupled, electrohydraulic total artificial heart (EHTAH) was developed for long-term use. The automatic control system must ensure that the device: 1) maintains a physiological cardiac output response, 2) compensates for nonphysiological conditions, and 3) is stable, reliable, and operates at high power efficiency. The developed automatic control system met these requirements both in vitro, in week-long continuous mock circulation tests, and in vivo, in acute open-chested animals (calves). Satisfactory results were also obtained in a series of chronic animal experiments, including 21 days of continuous operation in the fully automatic control mode and 138 days of operation in a manual mode in a 159-day calf implant.

  15. Mobile large scale 3D coordinate measuring system based on network of rotating laser automatic theodolites

    NASA Astrophysics Data System (ADS)

    Liu, Zhigang; Liu, Zhongzheng; Wu, Jianwei; Xu, Yaozhong

    2010-08-01

    This paper presents a mobile 3D coordinate measuring system for large scale metrology. The system is composed of a network of rotating laser automatic theodolites (N-RLATs) and a portable touch probe. In the N-RLAT system, each RLAT consists of two laser fans which rotate about the unit's Z axis at a constant speed and scan the whole metrology space. Optical sensors mounted on the portable touch probe receive the sweeping laser fans and generate corresponding pulse signals, which establish a relationship between the rotation angle of a laser fan and time; the space angle measurement is thereby converted into a precise measurement of the pulse signal's peak time. The rotating laser fans are modeled mathematically as a time-varying parametric vector in the local frame of each unit. A two-step on-site calibration method is presented for solving the parameters of each RLAT and the coordinate transformations among the N-RLATs. The portable probe is composed of an array of optical sensors with specified geometric features and a touch point; the coordinates of the optical sensors are determined by the N-RLATs and the touch point is estimated by solving a non-linear system. A prototype mobile 3D coordinate measuring system has been developed, and experimental results show its validity.

  16. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    PubMed

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential for artificial retina systems to function in a way as similar as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for the parameters that best approximate a synthetic retinal model output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses.
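
    The core of NSGA-II selection is Pareto dominance over the objective vector; the sketch below shows that primitive for four minimized metrics (NSGA-II proper adds non-dominated sorting into fronts and crowding-distance selection). The parameter vectors and metric values are toy data.

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse on every objective and strictly
        better on at least one (all four metrics minimized here)."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        """Return the non-dominated set, the first 'front' kept when
        selecting retina-model parameter vectors."""
        return [p for p in population
                if not any(dominates(q["f"], p["f"])
                           for q in population if q is not p)]

    # Each candidate: retina-model parameters plus its four error metrics.
    pop = [{"params": [0.1, 2.0], "f": (0.3, 0.5, 0.2, 0.4)},
           {"params": [0.2, 1.5], "f": (0.2, 0.6, 0.3, 0.3)},
           {"params": [0.3, 1.0], "f": (0.4, 0.7, 0.4, 0.5)}]
    print(len(pareto_front(pop)))   # -> 2 (the third candidate is dominated)
    ```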

  17. SGML-based construction and automatic organization of comprehensive medical textbook on the Internet.

    PubMed

    Miyo, K; Ohe, K

    1998-01-01

    The amount of knowledge required in practical medicine is large and ever increasing. Medical staff must select and use appropriate pieces of knowledge from this flood of medical information. Recent Internet technology may solve these problems because it makes information open to the public immediately after it is created and enables many people to share it. Medical resources on the Internet, however, are currently not always well organized, because they are often voluntarily provided by experts in particular fields. We therefore decided to create a comprehensive medical database on the Internet which is well organized and of high quality for practical medical use. In order to make full use of the benefits provided by electronic media, we created a new structured data set. We commissioned authors to write manuscripts from which we created Standard Generalized Markup Language (SGML) documents, and then wrote a translation program that took the SGML and automatically created fully inter-linked HyperText Markup Language (HTML) documents. The translation program generated 4,814 HTML files from 1,373 SGML documents; the total data size including pictures was about 640 MB, and 205,775 related links were created. We then published our electronic medical textbook in HTML publicly on the Internet. Using SGML-based structured data, we constructed a complex electronic medical textbook built organically from simple SGML instances. Our electronic medical textbook is systematic and comprehensive, and has a homogeneous structure. We believe that this is the first comprehensive medical textbook available on the Internet. Furthermore, we found that our approach has two major advantages: one is the automatic generation of inter-links among documents, and the other is ease of document maintenance. In addition, once the electronic textbase is constructed in SGML format, the data can be utilized to

  18. An ECG-based Algorithm for the Automatic Identification of Autonomic Activations Associated with Cortical Arousal

    PubMed Central

    Basner, Mathias; Griefahn, Barbara; Müller, Uwe; Plath, Gernot; Samel, Alexander

    2007-01-01

    supplement visual EEG arousal scoring by this automatic, objective, reproducible, cheap, and time-saving method. Citation: Basner M; Griefahn B; Müller U; Plath G; Samel A. An ECG-based Algorithm for the automatic identification of autonomic activations associated with cortical arousal. SLEEP 2007;30(10):1349-1361. PMID:17969469

  19. A Computer Model of the Evaporator for the Development of an Automatic Control System

    NASA Astrophysics Data System (ADS)

    Kozin, K. A.; Efremov, E. V.; Kabrysheva, O. P.; Grachev, M. I.

    2016-08-01

    For the implementation of a closed nuclear fuel cycle it is necessary to carry out a series of experimental studies to justify the choice of technology. In addition, the operation of a radiochemical plant is impossible without high-quality automatic control systems. In spent nuclear fuel reprocessing technologies, continuous evaporation is often used for solution conditioning, so an effective continuous technological process depends on the operation of the evaporation equipment. Its essential difference from similar devices is its small size. In this paper, mathematical simulation is applied to investigate a single-effect evaporator with an external heating chamber. Detailed modelling is quite difficult because the phase-equilibrium dynamics of the evaporation process are not fully described and the unit interacts with other process units. The results proved that the subject of study is a MIMO plant that is nonlinear over the separate control channels and not self-balancing. Adequacy was tested using experimental data obtained at a laboratory evaporation unit.

  20. Automatic speech recognizer based on the Spanish spoken in Valdivia, Chile

    NASA Astrophysics Data System (ADS)

    Sanchez, Maria L.; Poblete, Victor H.; Sommerhoff, Jorge

    2004-05-01

    The performance of an automatic speech recognizer is affected by the training process (speaker-dependent or speaker-independent) and by the size of the vocabulary. The language used in this study was the Spanish spoken in the city of Valdivia, Chile. A representative sample of 14 students and six professionals, all natives of Valdivia (ten women and ten men) and ranging in age between 20 and 30 years, was used in the study. Two systems were programmed based on the classical principles: digitization, end-point detection, linear predictive coding, cepstral coefficients, dynamic time warping, and a final decision stage with a prior training step: (i) speaker-dependent (15 words: five colors and ten numbers), (ii) speaker-independent (30 words: ten verbs, ten nouns, and ten adjectives). A simple didactic application, with options to choose colors, numbers and drawings of the verbs, nouns and adjectives, was designed to be used with a personal computer. In both programs, the tests carried out showed a tendency towards errors in short monosyllabic words like "flor" and "sol". The best results were obtained for words with three syllables like "disparar" and "mojado". [Work supported by Proyecto DID UACh N S-200278.]
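
    The dynamic time warping stage named in the abstract is a classic algorithm and can be stated compactly; the sequences here are synthetic scalars standing in for frames of cepstral coefficients.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two feature sequences,
        the matching step used to compare an utterance against each
        trained word template."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(np.atleast_1d(a[i - 1] - b[j - 1]))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = np.sin(np.linspace(0, 3, 40))    # stored word template
    utterance = np.sin(np.linspace(0, 3, 55))   # same word, spoken slower
    print(dtw_distance(utterance, template))    # small despite time warping
    ```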

  1. A Hessian-based methodology for automatic surface crack detection and classification from pavement images

    NASA Astrophysics Data System (ADS)

    Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf

    2015-04-01

    Around 3 trillion vehicle miles are traveled annually on US transportation systems alone. In addition to road traffic safety, maintaining the road infrastructure in sound condition promotes a more productive and competitive economy. Because of the significant financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects is often delayed, resulting in deferred maintenance operations. This paper introduces an automatic system for the acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter is utilized to detect ridges in these images at various scales. Post-processing of the extracted features produces statistics of the length, width, and area covered by cracks, which are crucial for roadway agencies assessing pavement quality. The process was applied to three sets of roads with different pavement conditions in the city of Brockton, MA. A manually labeled ground truth dataset is made available for evaluation, and results rendered more than 90% segmentation accuracy, demonstrating the feasibility of employing this approach at a larger scale.
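
    A sketch of a multi-scale Hessian ridge filter of the kind the paper names: second derivatives are taken at each scale, and the larger eigenvalue of the 2x2 Hessian (strongly positive across a dark crack) is kept, maximized over scales. The scale set and normalization are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def crack_response(img, sigmas=(1.0, 2.0, 4.0)):
        """Multi-scale Hessian ridge response for dark, line-like cracks."""
        img = img.astype(float)
        best = np.zeros_like(img)
        for s in sigmas:
            # Gaussian second derivatives (order per axis: rows, columns).
            Hxx = ndimage.gaussian_filter(img, s, order=(0, 2))
            Hyy = ndimage.gaussian_filter(img, s, order=(2, 0))
            Hxy = ndimage.gaussian_filter(img, s, order=(1, 1))
            disc = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
            lam_max = (Hxx + Hyy) / 2 + disc       # larger eigenvalue
            best = np.maximum(best, s ** 2 * np.clip(lam_max, 0, None))
        return best   # threshold, then measure crack length/width/area
    ```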

  2. Automatic image orientation detection via confidence-based integration of low-level and semantic cues.

    PubMed

    Luo, Jiebo; Boutell, Matthew

    2005-05-01

    Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.

  3. Development of Automatic Controller of Brain Temperature Based on the Conditions of Clinical Use

    NASA Astrophysics Data System (ADS)

    Utsuki, Tomohiko; Wakamatsu, Hidetoshi

    A new automatic controller of brain temperature was developed based on the inevitable conditions of its clinical use, considered from the viewpoint of various kinds of feasibility, in particular an electric power consumption of less than 1,500 W in the ICU. An adaptive algorithm was employed to cope with the individual time-varying characteristics of patients. A controller for water-surface cooling hypothermia requires substantial power for the frequent regulation of the water temperature of the cooling blankets. In this study, the power consumption of the controller was therefore checked in several examinations, including control simulations of brain temperature using a mannequin with thermal characteristics similar to those of adult patients. The accuracy required for therapeutic brain hypothermia, i.e. a control deviation within ±0.1°C, was experimentally confirmed using the root mean square of the control error. The present controller consumes less energy than our conventional controller and still keeps a power margin of more than 300 W even in full operation. The clinically required water temperature was likewise confirmed to be attainable within the limit of the power supply. Practical application is therefore highly expected, with less physical burden on medical staff, better usability, and better medical cost performance.

  4. Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle

    NASA Astrophysics Data System (ADS)

    Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui

    2016-03-01

    Automatic guided vehicles (AGVs), a kind of mobile robot, have been widely used in many applications. To adapt better to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, increasing their flexibility and maneuverability. However, as an AGV with this kind of wheel suffers from position errors mainly because of frequent wheel slip, measuring its position accurately in real time is an extremely important issue. Among the ways of achieving this, photoelectric scanning based on angle measurement is efficient. Hence, we propose a feasible method to improve the positioning process, which integrates four photoelectric receivers and one laser transmitter. To verify its practicality and accuracy, actual experiments and computer simulations were conducted. In the simulations, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiments, the stability, accuracy, and dynamic capability of the method were inspected. The results demonstrate that the system works well and that the position measurement performance is high enough to fulfill mainstream tasks.
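
    The geometric core of angle-measurement positioning is intersecting rays defined by the measured angles; the sketch below simplifies the paper's one-transmitter, four-receiver layout to two known stations each measuring an azimuth to the target.

    ```python
    import numpy as np

    def position_from_angles(p1, theta1, p2, theta2):
        """2-D target position from azimuth angles measured at two known
        stations: intersect the two rays p1 + t1*d1 and p2 + t2*d2."""
        d1 = np.array([np.cos(theta1), np.sin(theta1)])
        d2 = np.array([np.cos(theta2), np.sin(theta2)])
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t[0] * d1

    print(position_from_angles((0, 0), np.deg2rad(45),
                               (10, 0), np.deg2rad(135)))   # -> [5. 5.]
    ```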

  5. Automatic classification of infant sleep based on instantaneous frequencies in a single-channel EEG signal.

    PubMed

    Čić, Maja; Šoda, Joško; Bonković, Mirjana

    2013-12-01

    This study presents a novel approach to electroencephalogram (EEG) signal quantification in which the empirical mode decomposition method, a time-frequency method designed for nonlinear and non-stationary signals, decomposes the EEG signal into intrinsic mode functions (IMFs) whose frequency ranges characterize the oscillatory modes embedded in the brain's neural activity. To calculate the instantaneous frequency of the IMFs, an algorithm was developed using the Generalized Zero Crossing method. From the resulting frequencies, two novel features were generated: the median instantaneous frequency and the number of instantaneous frequency changes during a 30 s segment, for each of seven IMFs. Sleep stage classification for the daytime sleep of 20 healthy babies was performed using the Support Vector Machine classification algorithm. The results were evaluated using cross-validation, achieving approximately 90% accuracy, and on data from new examinees, achieving 80% average accuracy. These results exceeded the level of agreement between human experts and were statistically significant, establishing the method, based on the proposed features, as an efficient procedure for automatic sleep stage classification. The uniqueness of this study arises from the newly proposed time-frequency features, which bind characteristics of the sleep signals to the oscillation modes of brain activity, reflect the physical characteristics of sleep, and thus have the potential to highlight the congruency of twin pairs, with potential implications for the genetic determination of sleep.
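
    A simplified proxy for the Generalized Zero Crossing step: the zero-crossing rate of an IMF gives a local frequency estimate per window, from which median frequency and frequency-change counts can be derived; the actual GZC method uses several crossing and extrema intervals rather than this plain rate.

    ```python
    import numpy as np

    def zero_crossing_frequency(imf, fs, win_s=30.0):
        """Per-window frequency estimate of one IMF from its zero-crossing
        rate (two crossings per oscillation cycle)."""
        win = int(win_s * fs)
        feats = []
        for start in range(0, len(imf) - win + 1, win):
            seg = imf[start:start + win]
            crossings = np.sum(np.diff(np.signbit(seg).astype(int)) != 0)
            feats.append(crossings / (2.0 * win_s))   # Hz
        return np.array(feats)

    fs = 100
    imf = np.sin(2 * np.pi * 2.0 * np.arange(60 * fs) / fs)   # 2 Hz mode
    print(zero_crossing_frequency(imf, fs))                    # ~[2. 2.]
    ```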

  6. A Novel Method Based on Learning Automata for Automatic Lesion Detection in Breast Magnetic Resonance Imaging

    PubMed Central

    Salehi, Leila; Azmi, Reza

    2014-01-01

    Breast cancer continues to be a significant public health problem in the world, and early detection is the key to improving breast cancer prognosis. Magnetic resonance imaging (MRI) is emerging as a powerful tool for the detection of breast cancer, but breast MRI presently has two major challenges. First, its specificity is relatively poor and it produces many false positives (FPs). Second, the method involves acquiring several high-resolution image volumes before, during, and after the injection of a contrast agent; this large volume of data makes interpretation by the radiologist both complex and time-consuming. These challenges have led to the development of computer-aided detection systems to improve the efficiency and accuracy of the interpretation process. Detection of suspicious regions of interest (ROIs) is a critical preprocessing step in dynamic contrast-enhanced (DCE)-MRI data evaluation. In this regard, this paper introduces a new automatic method, based on region growing, to detect suspicious ROIs in breast DCE-MRI. The results indicate that the proposed method reliably identifies suspicious regions (accuracy of 75.39 ± 3.37 on the PIDER breast MRI dataset). Furthermore, the number of FPs per image is 7.92 on average, a considerable improvement compared to other methods such as ROI hunter. PMID:25298929
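
    Region growing itself is a standard primitive; a minimal 2-D version with an intensity-similarity acceptance rule (the seed selection and tolerance used in the paper are not specified, so both are assumptions here):

    ```python
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=0.1):
        """Grow a suspicious ROI from a seed pixel: accept 4-connected
        neighbours whose intensity stays within tol of the seed value."""
        H, W = img.shape
        mask = np.zeros((H, W), bool)
        ref = img[seed]
        queue = deque([seed])
        mask[seed] = True
        while queue:
            i, j = queue.popleft()
            for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= a < H and 0 <= b < W and not mask[a, b] \
                        and abs(img[a, b] - ref) <= tol:
                    mask[a, b] = True
                    queue.append((a, b))
        return mask
    ```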

  7. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some characteristics that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, benchmark testing data provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  8. Fully automatic cardiac segmentation from 3D CTA data: a multi-atlas based approach

    NASA Astrophysics Data System (ADS)

    Kirisli, Hortense A.; Schaap, Michiel; Klein, Stefan; Neefjes, Lisan A.; Weustink, Annick C.; Van Walsum, Theo; Niessen, Wiro J.

    2010-03-01

    Computed tomography angiography (CTA), a non-invasive imaging technique, is becoming increasingly popular for cardiac examination, mainly due to its superior spatial resolution compared to MRI. This imaging modality is currently widely used for the diagnosis of coronary artery disease (CAD) but it is not commonly used for the diagnosis of ventricular and atrial function. In this paper, we present a fully automatic method for segmenting the whole heart (i.e. the outer surface of the myocardium) and cardiac chambers from CTA datasets. Cardiac chamber segmentation is particularly valuable for the extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction. With our approach, we aim to improve the diagnosis of CAD by providing functional information extracted from the same CTA data, thus not requiring additional scanning. In addition, the whole heart segmentation method we propose can be used for visualization of the coronary arteries and for obtaining a region of interest for subsequent segmentation of the coronaries, ventricles and atria. Our approach is based on multi-atlas segmentation, and performed within a non-rigid registration framework. A leave-one-out quantitative validation was carried out on 8 images. The method showed a high accuracy, which is reflected in both a mean segmentation error of 1.05+/-1.30 mm and an average Dice coefficient of 0.93. The robustness of the method is demonstrated by successfully applying the method to 243 additional datasets, without any significant failure.
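
    After each atlas has been non-rigidly registered to the target, its labels are fused; the paper does not state its fusion rule in the abstract, so the sketch below uses majority voting, the most common choice in multi-atlas segmentation.

    ```python
    import numpy as np

    def majority_vote(propagated_labels):
        """Fuse warped atlas segmentations: the fused label of a voxel is
        the one most atlases agree on.
        propagated_labels: list of integer label volumes of equal shape."""
        stack = np.stack(propagated_labels)        # (n_atlases, X, Y, Z)
        n_labels = int(stack.max()) + 1
        votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
        return votes.argmax(axis=0)

    atlases = [np.random.randint(0, 3, (4, 4, 4)) for _ in range(7)]
    print(majority_vote(atlases).shape)            # (4, 4, 4) fused labels
    ```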

  9. Automatic co-segmentation of lung tumor based on random forest in PET-CT images

    NASA Astrophysics Data System (ADS)

    Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian

    2016-03-01

    In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in the CT images and initial connected regions are obtained by threshold-based segmentation in the PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in the PET images; and (3) fine segmentation, in which the random forests method is applied to accurately segment the lung tumor by extracting effective features from the PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
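
    The fine-segmentation stage, a voxel-wise random forest on joint PET/CT features, can be sketched with scikit-learn; the two features and the synthetic training data below are illustrative stand-ins for the paper's feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Per-voxel feature vector combining both modalities, e.g. PET SUV and
    # CT Hounsfield value (the full system adds neighbourhood statistics).
    rng = np.random.default_rng(0)
    X_tumor = np.column_stack([rng.normal(6, 1, 200),
                               rng.normal(30, 15, 200)])
    X_bg = np.column_stack([rng.normal(1, 0.5, 200),
                            rng.normal(-300, 150, 200)])
    X = np.vstack([X_tumor, X_bg])
    y = np.r_[np.ones(200), np.zeros(200)]

    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(rf.predict([[5.5, 25.0]]))   # high SUV + soft tissue -> tumor (1)
    ```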

  10. Liver segmentation in MRI: A fully automatic method based on stochastic partitions.

    PubMed

    López-Mir, F; Naranjo, V; Angulo, J; Alcañiz, M; Luna, L

    2014-04-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI), despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, we have carried out liver segmentation in MRI. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marker-controlled algorithm. To improve the accuracy of the selected contours, the gradient of the original image is enhanced by applying a new variant of the stochastic watershed. Moreover, a final classification step is performed in order to obtain the final liver mask. The optimal parameters of the method are tuned using a training dataset and then applied to the remaining studies (17 datasets). The obtained results (a Jaccard coefficient of 0.91 ± 0.02), in comparison to other methods, demonstrate that the new variant of the stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.
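
    A minimal marker-controlled watershed on the gradient image, assuming scikit-image is available; the paper's stochastic-watershed gradient enhancement and final classifier are omitted, and the seed placement is left to the caller.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.segmentation import watershed

    def liver_candidate(img, liver_seed, bg_seed):
        """Marker-controlled watershed: markers suppress the classical
        watershed over-segmentation.  Seeds are (row, col) tuples."""
        gradient = ndimage.gaussian_gradient_magnitude(img.astype(float),
                                                       sigma=2)
        markers = np.zeros(img.shape, dtype=int)
        markers[liver_seed] = 1    # inside the liver
        markers[bg_seed] = 2       # background
        labels = watershed(gradient, markers)
        return labels == 1
    ```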

  11. Research of automatic counting paper money technology based on two-dimensional histogram θ-division

    NASA Astrophysics Data System (ADS)

    Liu, Yongze; Meng, Qingshen; Song, Xuejun; Li, Aiting

    2011-12-01

    At present, money counting in financial settings mostly relies on dedicated money counters. This paper presents a new method for automatically counting paper money based on image processing technology. First, an image of the paper money is acquired by a CCD. Analysis of the image features shows that in the Cr channel the edges of the individual banknotes are enhanced. We then use a north-west Sobel operator for filtering and a north Sobel operator for edge detection. Although the processed image highlights the edge of each banknote well, the edges are rough with high variance, so it is hard to threshold the image to obtain linked single-pixel edges. After testing different segmentation algorithms for deriving the banknote edges, we found the two-dimensional histogram θ-division algorithm suitable for this purpose. The experimental results are satisfactory: the detection rate reached 100% in a controlled environment for RMB. However, to detect other kinds of paper money, such as dollars, several problems remain to be solved.
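
    The Cr-channel edge filtering stage can be sketched directly; the compass-style kernels below are standard directional edge detectors assumed to correspond to the paper's "north-west" and "north" operators, and the BT.601 Cr conversion is a common convention.

    ```python
    import numpy as np
    from scipy import ndimage

    def cr_edges(rgb):
        """Edge map of a banknote image computed in the Cr chrominance
        channel, where banknote edges stand out."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cr = 0.713 * (r - y) + 128.0               # BT.601 Cr channel
        kern_nw = np.array([[ 2,  1,  0],
                            [ 1,  0, -1],
                            [ 0, -1, -2]], float)   # north-west compass kernel
        kern_n = np.array([[ 1,  2,  1],
                           [ 0,  0,  0],
                           [-1, -2, -1]], float)    # north (Sobel) kernel
        nw = ndimage.convolve(cr, kern_nw)
        n = ndimage.convolve(cr, kern_n)
        return np.hypot(n, nw)                      # combined edge strength
    ```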

  12. Automatic detection of micro-aneurysms in retinal images based on curvelet transform and morphological operations

    NASA Astrophysics Data System (ADS)

    Mohammad Alipour, Shirin Hajeb; Rabbani, Hossein

    2013-09-01

    Diabetic retinopathy (DR) is one of the major complications of diabetes; it changes the blood vessels of the retina and distorts patient vision, and in advanced stages it can lead to blindness. Micro-aneurysms (MAs) are one of the first pathologies associated with DR, and their number and location are very important in grading DR. Early diagnosis of MAs can reduce the incidence of blindness. As MAs are tiny areas of blood protruding from vessels in the retina, about 25 to 100 microns in size, automatic detection of these tiny lesions is still challenging. MAs occurring in the macula can lead to visual loss, and the position of a lesion such as an MA relative to the macula is a useful feature for the analysis and classification of different stages of DR. Because MAs are more distinguishable in fundus fluorescein angiography (FFA) than in color fundus images, we introduce a new method based on the curvelet transform and morphological operations for MA detection in FFA images. As vessels and MAs are the bright parts of an FFA image, the vessels extracted by the curvelet transform are first removed from the image; morphological operations are then applied to the resulting image to detect the MAs.

  13. Novel system for automatic measuring diopter based on ARM circuit block

    NASA Astrophysics Data System (ADS)

    Xue, Feng; Zhong, Lei; Chen, Zhe; Xue, Deng-pan; Li, Xiang-ning

    2009-07-01

    Traditional commercial instruments used in vision screening programs fall far short of the requirements of real-time diopter measurement, and their success is limited by drawbacks such as the need for an attached computer, bulky size, and low accuracy of the measured parameters. In addition, many devices cannot assess astigmatic eyes. This paper proposes a new diopter measurement system based on SAMSUNG's ARM9 circuit block, with several contributions in the design. The newly developed system not only measures diopter automatically, but also has the advantages of low cost and, especially, simplicity and portability. Besides, by placing point sources in three directions, the instrument can assess astigmatic eyes at the same time. Most of the details are introduced, including the integrated design of the measuring system and the interface circuit of the embedded system. A preliminary experiment proved that the system achieves good feasibility and validity; the maximum deviation of the measurement results is 0.344 D. The experimental results also demonstrate that the system can provide the service needed for real-time applications. The instrument presented here is expected to be widely applied in many fields, such as clinics and home healthcare.

  14. Reaction-contingency based bipartite Boolean modelling

    PubMed Central

    2013-01-01

    Background Intracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts. Results Here, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description. Conclusions Taken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation. PMID:23835289
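
    The update scheme can be made concrete with a toy two-step cascade: reaction nodes and state nodes alternate, each reaction is gated by a contingency (a Boolean expression over states), and each state is set by the reactions producing it. A minimal sketch follows; the cascade and all names are invented, not part of rxncon.

    ```python
    # Bipartite Boolean toy model: separate update rules for reactions and states
    states = {"A_active": True, "B_phos": False, "C_phos": False}

    # Contingencies: each reaction fires only if its Boolean condition holds
    reactions = {
        "A_phosphorylates_B": lambda s: s["A_active"],
        "B_phosphorylates_C": lambda s: s["B_phos"],
    }
    produces = {"A_phosphorylates_B": "B_phos", "B_phosphorylates_C": "C_phos"}

    for step in range(3):                       # synchronous updates
        fired = {r: cond(states) for r, cond in reactions.items()}
        for r, target in produces.items():
            states[target] = states[target] or fired[r]
        print(step, states)
    ```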

  15. Influence of CBCT parameters on the output of an automatic edge-detection-based endodontic segmentation

    PubMed Central

    Georgelin-Gurgel, M; Mallet, J-P; Diemer, F; Boulanouar, K

    2015-01-01

    Objectives: To determine the optimal CBCT settings for an automatic edge-detection-based endodontic segmentation procedure by assessing the accuracy of the root canal measurements. Methods: 12 intact teeth with closed apexes were cut perpendicular to the root axis at pre-determined levels relative to the reference plane (the first section made before acquisition). Acquisitions of each specimen were performed with a Kodak 9000® 3D (76 µm, 14 bits; Kodak Carestream Health, Trophy, France) using different combinations of milliamperes and kilovolts. Three-dimensional images were displayed and root canals were segmented with the MeVisLab software (edge-detection-based method; MeVis Research, Bremen, Germany). Histological root canal sections were then digitized at 0.5- to 1.0-µm resolution and compared with the equivalent two-dimensional cone-beam reconstructions for each pair of settings using the Pearson correlation coefficient, regression analysis and the Bland–Altman method for the canal area and Feret's diameter. After a ranking process, a Wilcoxon paired test was carried out to compare the pairs of settings. Results: The best pair of acquisition settings was 3.2 mA/60 kV. Significant differences were found between 3.2 mA/60 kV and the other settings (p < 0.05) for the root canal area and for Feret's diameter. Conclusions: The quantitative analysis of the root canal system with the edge-detection-based method can depend on the acquisition parameters. Improvements in segmentation are still needed to ensure the quality of the reconstructions, particularly for closely spaced outlines and given the low spatial resolution. PMID:26119343
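
    For reference, the Bland–Altman agreement statistics used above take only a few lines; the paired measurements below are placeholders, not the study's data.

    ```python
    import numpy as np

    histology = np.array([0.42, 0.55, 0.31, 0.60, 0.48])   # canal area, mm^2 (placeholder)
    cbct      = np.array([0.45, 0.52, 0.35, 0.58, 0.50])   # canal area, mm^2 (placeholder)

    diff = cbct - histology
    bias = diff.mean()                                      # systematic offset
    loa  = 1.96 * diff.std(ddof=1)                          # 95% limits of agreement
    print(f"bias = {bias:.3f} mm^2, LoA = [{bias - loa:.3f}, {bias + loa:.3f}]")
    ```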

  16. Automatic hippocampus segmentation of 7.0 Tesla MR images by combining multiple atlases and auto-context models

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Li, Wei; Wang, Li; Son, Young-Don; Cho, Zang-Hee; Shen, Dinggang

    2014-01-01

    In many neuroscience and clinical studies, accurate measurement of the hippocampus is very important for revealing inter-subject anatomical differences or subtle intra-subject longitudinal changes due to aging or dementia. Although many automatic segmentation methods have been developed, their performance is still challenged by the poor image contrast of the hippocampus in MR images acquired from 1.5 or 3.0 Tesla (T) scanners. With recent advances in imaging technology, 7.0 T scanners provide much higher image contrast and resolution for hippocampus studies. However, previous methods developed for segmenting the hippocampus from 1.5 T or 3.0 T images do not work on 7.0 T images, owing to the different levels of imaging contrast and texture information. In this paper, we present a learning-based algorithm for automatic segmentation of hippocampi from 7.0 T images, taking advantage of the state-of-the-art multi-atlas framework and the auto-context model (ACM). Specifically, the ACM is performed in each atlas domain to iteratively construct sequences of location-adaptive classifiers by integrating both image appearance and local context features. Owing to the rich texture information in 7.0 T images, more advanced texture features are also extracted and incorporated into the ACM during the training stage. Then, under the multi-atlas segmentation framework, multiple sequences of ACM-based classifiers are trained for all atlases to capture the anatomical variability. In the application stage, the hippocampus segmentation of a new image is achieved by fusing the labeling results from all atlases, each obtained by applying the atlas-specific ACM-based classifiers. Experimental results on twenty 7.0 T images with a voxel size of 0.35 × 0.35 × 0.35 mm3 show very promising hippocampus segmentations (a Dice overlap ratio of 89.1 ± 0.020), indicating high applicability for future clinical and neuroscience studies.
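
    The final fusion stage can be sketched as a majority vote across atlas-specific label maps; the trained ACM classifiers themselves are omitted, and the masks below are random placeholders.

    ```python
    import numpy as np

    def fuse_labels(label_maps):
        """label_maps: list of binary hippocampus masks, one per atlas."""
        stacked = np.stack(label_maps)                 # (n_atlases, x, y, z)
        votes = stacked.sum(axis=0)
        return votes > (len(label_maps) / 2)           # simple majority vote

    # Example with three random placeholder "atlas" segmentations
    rng = np.random.default_rng(0)
    maps = [rng.random((4, 4, 4)) > 0.5 for _ in range(3)]
    fused = fuse_labels(maps)
    ```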

  17. Markov Random Field Based Automatic Image Alignment for ElectronTomography

    SciTech Connect

    Moussavi, Farshid; Amat, Fernando; Comolli, Luis R.; Elidan, Gal; Downing, Kenneth H.; Horowitz, Mark

    2007-11-30

    Cryo electron tomography (cryo-ET) is the primary method for obtaining 3D reconstructions of intact bacteria, viruses, and complex molecular machines ([7],[2]). It first flash-freezes a specimen in a thin layer of ice and then rotates the ice sheet in a transmission electron microscope (TEM), recording images of different projections through the sample. The resulting images are aligned and then back-projected to form the desired 3-D model. The typical resolution of a biological electron microscope is on the order of 1 nm per pixel, which means that small imprecisions in the microscope's stage or lenses can cause large alignment errors. To enable a high-precision alignment, biologists add a small number of spherical gold beads to the sample before it is frozen. These beads generate high-contrast dots in the image that can be tracked across projections. Each gold bead can be seen as a marker with a fixed location in 3D, providing the reference points that bring all the images into a common frame, as in the classical structure-from-motion problem. A high-accuracy alignment is critical to obtaining a high-resolution tomogram (usually on the order of 5-15 nm resolution). While some methods try to automate the task of tracking markers and aligning the images ([8],[4]), they require user intervention if the SNR of the image becomes too low. Unfortunately, cryo-ET often has poor SNR, since the samples are relatively thick (for TEM) and the restricted electron dose usually results in projections with SNR under 0 dB. This paper shows that formulating the problem as a maximum-likelihood estimation task yields an approach that can automatically align cryo-ET datasets with high precision using inference in graphical models. This approach has been packaged into publicly available software called RAPTOR (Robust Alignment and Projection estimation for Tomographic Reconstruction).
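
    A heavily simplified version of the geometry: orthographic tilt-series projection with an unknown 2-D shift per image, recovered from gold-bead positions by least squares. The full RAPTOR formulation performs joint inference in a graphical model; this sketch only illustrates why fiducials pin down the alignment.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    beads = rng.uniform(-50, 50, size=(10, 3))           # 3-D bead coordinates (nm)
    tilts = np.deg2rad(np.arange(-60, 61, 20))           # tilt angles of the series

    def project(points, theta):
        """Orthographic projection after rotation about the y (tilt) axis."""
        x = points[:, 0] * np.cos(theta) + points[:, 2] * np.sin(theta)
        return np.stack([x, points[:, 1]], axis=1)

    true_shifts = rng.normal(0, 5, size=(len(tilts), 2)) # stage imprecision per image
    observed = [project(beads, t) + s for t, s in zip(tilts, true_shifts)]

    # With known 3-D bead positions, each image shift is the mean residual
    est_shifts = [np.mean(obs - project(beads, t), axis=0)
                  for obs, t in zip(observed, tilts)]
    print(np.allclose(est_shifts, true_shifts))          # True for noise-free data
    ```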

  18. Automatic Test Program Generation.

    DTIC Science & Technology

    1978-03-01

    Presents a test description language, NOPAL, in which a user may describe diagnostic tests, and a software system which automatically generates test programs for automatic test equipment based on the descriptions of tests. The software system accepts as input the tests specified in NOPAL, performs…

  19. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
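
    The classifier structure lends itself to a compact prototype: one HMM for event signals and one for background noise, with a likelihood-ratio decision per candidate window. A sketch using the hmmlearn package (not the authors' implementation) on placeholder feature vectors:

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    # Placeholder features; in practice these would be e.g. band-wise spectral energies
    event_feats = rng.normal(2.0, 1.0, size=(500, 4))
    noise_feats = rng.normal(0.0, 1.0, size=(5000, 4))

    event_model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                                  n_iter=50).fit(event_feats)
    noise_model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                                  n_iter=50).fit(noise_feats)

    window = rng.normal(1.8, 1.0, size=(120, 4))   # candidate detection window
    is_event = event_model.score(window) > noise_model.score(window)
    print("rockslide candidate:", bool(is_event))
    ```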

  20. Graph Theory-Based Brain Connectivity for Automatic Classification of Multiple Sclerosis Clinical Courses

    PubMed Central

    Kocevar, Gabriel; Stamile, Claudio; Hannoun, Salem; Cotton, François; Vukusic, Sandra; Durand-Dubief, Françoise; Sappey-Marinier, Dominique

    2016-01-01

    Purpose: In this work, we introduce a method to classify Multiple Sclerosis (MS) patients into four clinical profiles using structural connectivity information. For the first time, we try to solve this question in a fully automated way using a computer-based method. The main goal is to show how the combination of graph-derived metrics with machine learning techniques constitutes a powerful tool for a better characterization and classification of MS clinical profiles. Materials and Methods: Sixty-four MS patients [12 Clinically Isolated Syndrome (CIS), 24 Relapsing Remitting (RR), 24 Secondary Progressive (SP), and 17 Primary Progressive (PP)] along with 26 healthy controls (HC) underwent MR examination. T1 and diffusion tensor imaging (DTI) were used to obtain structural connectivity matrices for each subject. Global graph metrics, such as density and modularity, were estimated and compared between subjects' groups. These metrics were further used to classify patients using a tuned Support Vector Machine (SVM) combined with a Radial Basis Function (RBF) kernel. Results: When comparing MS patients to HC subjects, a greater assortativity, transitivity, and characteristic path length as well as a lower global efficiency were found. Using all graph metrics, the best F-Measures (91.8, 91.8, 75.6, and 70.6%) were obtained for binary (HC-CIS, CIS-RR, RR-PP) and multi-class (CIS-RR-SP) classification tasks, respectively. When using only one graph metric, the best F-Measures (83.6, 88.9, and 70.7%) were achieved for modularity with previous binary classification tasks. Conclusion: Based on a simple DTI acquisition associated with structural brain connectivity analysis, this automatic method allowed an accurate classification of different MS patients' clinical profiles. PMID:27826224
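
    The metrics-plus-SVM pipeline can be prototyped directly with networkx and scikit-learn; the connectivity matrices, labels, and hyperparameter grid below are placeholders, not the study's data.

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def graph_features(adj):
        g = nx.from_numpy_array(adj)
        return [nx.density(g), nx.transitivity(g),
                nx.degree_assortativity_coefficient(g), nx.global_efficiency(g)]

    def random_connectome(rng, n=30):
        m = rng.random((n, n))
        a = ((m + m.T) / 2) > 0.7                # symmetric binary adjacency
        np.fill_diagonal(a, False)
        return a.astype(float)

    rng = np.random.default_rng(0)
    X = np.array([graph_features(random_connectome(rng)) for _ in range(40)])
    y = rng.integers(0, 2, size=40)              # placeholder clinical labels

    clf = GridSearchCV(SVC(kernel="rbf"),        # tuned RBF-kernel SVM
                       {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
                       cv=5).fit(X, y)
    print(clf.best_params_)
    ```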

  2. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    PubMed

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

    In recent years, earthquakes have occurred frequently all over the world, causing huge casualties and economic losses. It is necessary and urgent to obtain the seismic intensity map quickly, so as to assess the distribution of the disaster and support rapid earthquake relief. Compared with traditional methods of drawing seismic intensity maps, which require extensive field investigation of the earthquake area or depend heavily on empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information Systems (GIS) provide a fast and economical way to recognize seismic intensity automatically. Integrating the two, this paper proposes an RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract information on damage caused by the earthquake, and GIS is applied to manage and display the seismic intensity data. The case study of the Wenchuan Ms8.0 earthquake in China shows that information on seismic intensity can be extracted automatically from remotely sensed images soon after an earthquake occurs, and that the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  3. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. Using high-resolution photographs, the DOM can reach a much higher resolution than LIDAR surveys, up to 0.2 mm/pixel. Image processing is then performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated from the host rock (tonalite) quite easily using spectral analysis; in particular, band ratios and principal component analysis have been tested successfully. Mapping the black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds (see the sketch below). Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After
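
    A minimal sketch of the size-plus-circularity filter used to separate small biotite grains from pseudotachylyte veins on a binary map; the input file and both thresholds are assumptions.

    ```python
    import numpy as np
    from skimage import io, measure

    # Placeholder binary mask of dark phases (biotite grains + pseudotachylyte)
    binary = io.imread("dark_phase_mask.png", as_gray=True) > 0.5
    labels = measure.label(binary)

    veins = np.zeros_like(binary)
    for region in measure.regionprops(labels):
        # circularity = 4*pi*A / P^2 (1 for a disk); biotite grains are small
        # and nearly circular, veins are larger and elongated
        circ = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
        if region.area > 200 and circ < 0.5:
            veins[labels == region.label] = True
    ```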

  4. Acquire High Quality Meshes of Scale Models for AN Automatic Modelling Process

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Jacquot, K.; Chevrier, C.; Halin, G.

    2013-07-01

    Urban scale models depicting whole towns, such as the collection of a hundred or so scale models known as plans-reliefs, are a valuable source of information on cities and their surroundings. These physical representations of French strongholds from the 17th through the 19th century suffer from many problems, among them wear and tear and a lack of visibility and accessibility. A virtual collection would allow remote access for visitors as well as history researchers. Moreover, it could be linked to other digital collections and thereby promote the collection, encouraging people to come to the museums to see the physical scale models. We also work on other physical town scale models, such as that of Epinal, whose scale is somewhat larger. In a first part, we define a protocol for acquiring 3D meshes of town scale models by both photogrammetric and scanning methods, and then compare the results of the two. The photogrammetric protocol was elaborated by choosing the most accurate software, 123DCatch, which requires about 60 pictures, and defining the settings needed to obtain exploitable photographs. In the same way, we defined the devices and settings needed for the laser scan acquisition method. In a second part, we segment the 3D meshes into planes using Geomagic, chosen from among several programs for the accuracy of its resulting geometry.

  5. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    PubMed

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species-specific acoustic signals but also the identification of individual subjects, all within a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools for extracting the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented, inspired by the successful results obtained with the most widely known and complex acoustic communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover, this method also proved to be a powerful tool for assessing signal durations in large data sets. However, the system failed to recognize other sound types.
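
    The individual-identification recipe is easy to prototype: train one HMM per male and identify an unknown call by maximum log-likelihood. A minimal sketch with the hmmlearn package (not the authors' speech-recognition toolchain) and synthetic MFCC-like features:

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(2)
    fake_means = {"male_1": 0.0, "male_2": 3.0, "male_3": 6.0}  # invented offsets

    # One model per individual, trained on that individual's boatwhistles
    models = {fid: hmm.GaussianHMM(n_components=4, n_iter=30)
                      .fit(rng.normal(mu, 1.0, size=(300, 6)))
              for fid, mu in fake_means.items()}

    unknown_call = rng.normal(3.0, 1.0, size=(80, 6))           # really "male_2"
    best = max(models, key=lambda fid: models[fid].score(unknown_call))
    print("identified as:", best)                               # expect male_2
    ```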

  6. Automatic target recognition via classical detection theory

    NASA Astrophysics Data System (ADS)

    Morgan, Douglas R.

    1995-07-01

    Classical Bayesian detection and decision theory applies to arbitrary problems with underlying probabilistic models. When the models describe uncertainties in target type, pose, geometry, surround, scattering phenomena, sensor behavior, and feature extraction, then classical theory directly yields detailed model-based automatic target recognition (ATR) techniques. This paper reviews options and considerations arising under a general Bayesian framework for model- based ATR, including approaches to the major problems of acquiring probabilistic models and of carrying out the indicated Bayesian computations.
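
    The decision rule at the heart of this framework fits in a few lines; the Gaussian class-conditional likelihoods and priors below are invented stand-ins for the full target/sensor/feature models.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    priors = {"target": 0.1, "clutter": 0.9}        # assumed prior probabilities
    likelihoods = {
        "target":  multivariate_normal(mean=[3.0, 3.0]),
        "clutter": multivariate_normal(mean=[0.0, 0.0]),
    }

    def decide(x):
        # Posterior up to a constant: prior times class-conditional likelihood
        post = {c: priors[c] * likelihoods[c].pdf(x) for c in priors}
        return max(post, key=post.get)              # MAP decision

    print(decide([2.5, 2.8]))                       # -> "target"
    ```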

  7. A Sensitive and Automatic White Matter Fiber Tracts Model for Longitudinal Analysis of Diffusion Tensor Images in Multiple Sclerosis

    PubMed Central

    Stamile, Claudio; Kocevar, Gabriel; Cotton, François; Durand-Dubief, Françoise; Hannoun, Salem; Frindel, Carole; Guttmann, Charles R. G.; Rousseau, David; Sappey-Marinier, Dominique

    2016-01-01

    Diffusion tensor imaging (DTI) is a sensitive tool for the assessment of microstructural alterations in brain white matter (WM). We propose a new processing technique to detect local and global longitudinal changes of diffusivity metrics in homologous regions along WM fiber-bundles. To this end, a reliable and automatic processing pipeline was developed in three steps: 1) co-registration and computation of diffusion metrics, 2) tractography, bundle extraction and processing, and 3) longitudinal fiber-bundle analysis. The last step was based on an original Gaussian mixture model providing a fine analysis of fiber-bundle cross-sections and allowing a sensitive detection of longitudinal changes along fibers. This method was tested on simulated and clinical data. High levels of F-Measure were obtained on simulated data. Experiments on the cortico-spinal tract and inferior fronto-occipital fasciculi of five patients with Multiple Sclerosis (MS) included in a weekly follow-up protocol highlighted the greater sensitivity of this fiber-scale approach in detecting small longitudinal alterations. PMID:27224308
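
    The cross-section idea can be illustrated by fitting a Gaussian mixture to diffusivity values sampled in one fiber-bundle cross-section at follow-up; the data below are synthetic placeholders, and the published pipeline additionally handles registration, tractography, and per-fiber correspondence.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic FA values: a healthy population plus a small lesioned subset
    follow_up = np.vstack([rng.normal(0.55, 0.04, size=(160, 1)),
                           rng.normal(0.40, 0.03, size=(40, 1))])

    gm = GaussianMixture(n_components=2, random_state=0).fit(follow_up)
    means = sorted(gm.means_.ravel())
    print("suspect component mean FA:", round(means[0], 3))   # the lowered mode
    ```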

  8. Why does lag affect the durability of memory-based automaticity: loss of memory strength or interference?

    PubMed

    Wilkins, Nicolas J; Rawson, Katherine A

    2013-10-01

    In Rickard, Lau, and Pashler's (2008) investigation of the lag effect on memory-based automaticity, response times were faster and proportion of trials retrieved was higher at the end of practice for short lag items than for long lag items. However, during testing after a delay, response times were slower and proportion of trials retrieved was lower for short lag items than for long lag items. The current study investigated the extent to which the lag effect on the durability of memory-based automaticity is due to interference or to the loss of memory strength with time. Participants repeatedly practiced alphabet subtraction items in short lag and long lag conditions. After practice, half of the participants were immediately tested and the other half were tested after a 7-day delay. Results indicate that the lag effect on the durability of memory-based automaticity is primarily due to interference. We discuss potential modification of current memory-based processing theories to account for these effects.

  9. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    PubMed

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in orthopaedic surgery the solid modelling of screws and drill holes limits their use for individual cases and increases computational costs. To cope with these requirements, different methods for numerical screw modelling were investigated to broaden the diversity of its applications. As an example, fixation was performed for the stabilization of a large segmental femoral bone defect with an osteosynthesis plate. Three numerical modelling techniques for implant fixation were used in this study: no screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility of implementing automatically generated screws with variable geometry on arbitrary FE models. Structural screws were generated parametrically by a Python script for automatic generation in the FE software Abaqus/CAE, on both a tetrahedral and a hexahedral meshed femur (a sketch of such a generator follows below). The accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques, a correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% by using hexahedral instead of tetrahedral elements for the femur mesh. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent
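
    As a hedged sketch of what parametric screw generation can look like, the following avoids the Abaqus/CAE scripting API (whose calls are not reproduced here) and simply emits an input-deck fragment of B31 beam elements along a screw axis; node numbers, set names, and dimensions are illustrative placeholders.

    ```python
    import numpy as np

    def screw_deck(start, axis, length, n_elems, first_node=1000):
        """Emit *NODE and *ELEMENT cards for one beam-element screw."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        pts = [np.asarray(start, float) + axis * length * i / n_elems
               for i in range(n_elems + 1)]
        lines = ["*NODE"]
        lines += [f"{first_node + i}, {p[0]:.3f}, {p[1]:.3f}, {p[2]:.3f}"
                  for i, p in enumerate(pts)]
        lines.append("*ELEMENT, TYPE=B31, ELSET=SCREW")
        lines += [f"{first_node + i}, {first_node + i}, {first_node + i + 1}"
                  for i in range(n_elems)]
        return "\n".join(lines)

    print(screw_deck(start=(0, 0, 0), axis=(0, 0, 1), length=40.0, n_elems=8))
    ```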

  10. Automatic Recognition of Solar Features for Developing Data Driven Prediction Models of Solar Activity and Space Weather

    DTIC Science & Technology

    2012-07-06

    “Ephemeral Brightening,” 2nd ATST–East Workshop in Solar Physics: Magnetic Fields from the Photosphere to the Corona, Washington D.C., Mar 2012. AFRL-RV-PS-TR-2012-0133: Automatic Recognition of Solar Features for Developing Data Driven Prediction Models of Solar Activity and Space Weather. Jason Jackiewicz, New Mexico State University, Department of Astronomy, PO Box 30001, MSC 4500, Las…

  11. Development and testing of a decision making based method to adjust automatically the harrowing intensity.

    PubMed

    Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland

    2013-05-13

    Harrowing is often used to reduce weed competition, generally at a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to adjust the harrowing intensity automatically by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived from previously implemented experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS), sketched below. The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant across the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though it slightly improved crop yield. Real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.
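
    A linguistic fuzzy inference system of this kind reduces to membership functions, rules, and defuzzification. A minimal sketch with invented membership parameters (not the authors' calibrated rule base):

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                     (c - x) / (c - b + 1e-9)), 0.0)

    def harrow_intensity(weed_density, crop_cover):
        # Rule 1: many weeds AND robust crop -> intense harrowing
        r1 = min(tri(weed_density, 50, 150, 300), tri(crop_cover, 30, 60, 100))
        # Rule 2: few weeds OR sparse crop -> gentle harrowing
        r2 = max(tri(weed_density, 0, 20, 60), tri(crop_cover, 0, 10, 30))

        levels = np.linspace(1, 5, 101)                # tine-angle level axis
        intense = np.minimum(tri(levels, 3, 5, 7), r1) # clipped output sets
        gentle  = np.minimum(tri(levels, -1, 1, 3), r2)
        agg = np.maximum(intense, gentle)
        return float((levels * agg).sum() / (agg.sum() + 1e-9))  # centroid

    print(harrow_intensity(weed_density=120, crop_cover=70))
    ```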

  12. A fully automatic multi-atlas based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Most multi-atlas segmentation methods focus on registration between the full-size volumes of the data set. Although the transformations obtained from these registrations may be accurate for the global field of view of the images, they may not be accurate for the local prostate region, because different magnetic resonance (MR) images have different fields of view and may show large anatomical variability around the prostate. To overcome this limitation, we propose a two-stage prostate segmentation method based on a fully automatic multi-atlas framework, comprising a detection stage (locating the prostate) and a segmentation stage (extracting the prostate). The purpose of the first stage is to find a cuboid that contains the whole prostate in as small a volume as possible. In this paper, the cuboid containing the prostate is detected by registering atlas edge volumes to the target volume, with an edge detection algorithm applied to every slice in the volumes. In the second stage, the method focuses on registration in the vicinity of the prostate, which improves the accuracy of the prostate segmentation. We evaluated the proposed method on 12 patient MR volumes in a leave-one-out study. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) are used to quantify the difference between our method and the manual ground truth (both metrics are sketched below). The proposed method yielded a DSC of 83.4% +/- 4.3% and an HD of 9.3 +/- 2.6 mm. The fully automated segmentation method can provide a useful tool in many prostate imaging applications.
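
    The two reported metrics are straightforward to compute with numpy/scipy; the masks below are toy placeholders.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(a, b):
        pa, pb = np.argwhere(a), np.argwhere(b)   # boundary-free voxel version
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    seg = np.zeros((20, 20), bool); seg[5:15, 5:15] = True   # automatic result
    gt  = np.zeros((20, 20), bool); gt[6:16, 6:16]  = True   # manual truth
    print(f"DSC = {dice(seg, gt):.3f}, HD = {hausdorff(seg, gt):.2f} voxels")
    ```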

  13. Automatic vs. Manual Categorization of Documents in Spanish.

    ERIC Educational Resources Information Center

    Figuerola, Carlos G.; Rodriguez, Angel Francisco Zazo; Berrocal, Jose Luis Alonso

    2001-01-01

    Describes an experiment in automatic categorization, which is based on the vector model, widely used in information retrieval. Shows how the construction of the class patterns was carried out. Discusses the evaluation measures adopted and results obtained in the automatic categorization of a collection of documents in Spanish. Describes the manual…
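
    A vector-model categorizer of the kind described (TF-IDF vectors, one centroid "class pattern" per category, cosine similarity for assignment) can be sketched with scikit-learn; toy English documents stand in for the Spanish collection.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    train_docs = ["stocks fell sharply today", "the match ended in a draw",
                  "bond yields rose again", "the striker scored twice"]
    train_cats = ["finance", "sports", "finance", "sports"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(train_docs).toarray()

    # One "class pattern" (centroid) per category
    centroids = {c: X[np.array(train_cats) == c].mean(axis=0)
                 for c in set(train_cats)}

    query = vec.transform(["the striker scored a goal in the match"]).toarray()
    best = max(centroids,
               key=lambda c: cosine_similarity(query, centroids[c][None, :])[0, 0])
    print(best)  # expected: "sports"
    ```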

  14. Automatic transmission

    SciTech Connect

    Hamane, M.; Ohri, H.

    1989-03-21

    This patent describes an automatic transmission connected between a drive shaft and a driven shaft and comprising: a planetary gear mechanism including a first gear driven by the drive shaft, a second gear operatively engaged with the first gear to transmit speed change output to the driven shaft, and a third gear operatively engaged with the second gear to control the operation thereof; centrifugally operated clutch means for driving the first gear and the second gear. It also includes a ratchet type one-way clutch for permitting rotation of the third gear in the same direction as that of the drive shaft but preventing rotation in the reverse direction; the clutch means comprising a ratchet pawl supporting plate coaxially disposed relative to the drive shaft and integrally connected to the third gear, the ratchet pawl supporting plate including outwardly projection radial projections united with one another at base portions thereof.

  15. Comparison Of Stochastic Radial Basis Function And PEST For Automatic Calibration Of Computationally Expensive Groundwater Models With Application To Miyun-Huai-Shun Aquifer

    NASA Astrophysics Data System (ADS)

    Wan, Y.; Shoemaker, C.

    2012-12-01

    In this study, we compare the performance of three optimization algorithms and propose a new hybrid method. The algorithms are applied to the calibration of a groundwater model for part of the Beijing water supply. The three optimization algorithms are the derivative-based PEST algorithm, CMAES_P and the Stochastic Radial Basis Function (RBF) method. Our new hybrid method combines Stochastic RBF with the PEST derivative-based algorithm, providing the latter with starting points found by Stochastic RBF. This study compares the performance of the four algorithms for automatic parameter calibration of a groundwater model on three 28-parameter cases and two synthetic test-function calibration problems. On the basis of 20 trials, the results show that Stochastic RBF is the best of the three and that CMAES_P is superior to PEST. In addition, our hybrid method performs better on less complex problems, but still fails to beat Stochastic RBF in highly computationally expensive nonlinear cases.
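
    The hybrid idea (sample the expensive model, fit an RBF surrogate, then hand the surrogate's best point to a derivative-based local search as its starting point) can be sketched with scipy; a cheap synthetic objective stands in for the groundwater model, and L-BFGS-B stands in for PEST.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive(x):                                   # pretend this is the model
        return np.sum((x - 1.5) ** 2) + np.sin(5 * x).sum()

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 4, size=(60, 2))                # design points
    y = np.array([expensive(x) for x in X])

    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

    # Cheaply scan the surrogate for a promising start point
    grid = rng.uniform(-2, 4, size=(2000, 2))
    x0 = grid[np.argmin(surrogate(grid))]

    result = minimize(expensive, x0, method="L-BFGS-B") # local refinement
    print(result.x)
    ```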

  16. The VIMOS Public Extragalactic Redshift Survey (VIPERS). PCA-based automatic cleaning and reconstruction of survey spectra

    NASA Astrophysics Data System (ADS)

    Marchetti, A.; Garilli, B.; Granett, B. R.; Guzzo, L.; Iovino, A.; Scodeggio, M.; Bolzonella, M.; de la Torre, S.; Abbas, U.; Adami, C.; Bottini, D.; Cappi, A.; Cucciati, O.; Davidzon, I.; Franzetti, P.; Fritz, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; Polletta, M.; Pollo, A.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Arnouts, S.; Bel, J.; Branchini, E.; Coupon, J.; De Lucia, G.; Ilbert, O.; Moutard, T.; Moscardini, L.; Zamorani, G.

    2017-03-01

    Context. Identifying spurious reduction artefacts in galaxy spectra is a challenge for large surveys. Aims: We present an algorithm for identifying and repairing spurious residual features in sky-subtracted galaxy spectra by using data from the VIMOS Public Extragalactic Redshift Survey (VIPERS) as a test case. Methods: The algorithm uses principal component analysis (PCA) applied to the galaxy spectra in the observed frame to identify sky line residuals imprinted at characteristic wavelengths. We further model the galaxy spectra in the rest-frame using PCA to estimate the most probable continuum in the corrupted spectral regions, which are then repaired. Results: We apply the method to 90 000 spectra from the VIPERS survey and compare the results with a subset for which careful editing was performed by hand. We find that the automatic technique reproduces the time-consuming manual cleaning in a uniform and objective manner across a large data sample. The mask data products produced in this work are released together with the VIPERS second public data release (PDR-2). Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programs 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/.
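
    The repair step can be illustrated in a few lines: fit PCA to clean spectra, then replace a corrupted wavelength window with its PCA reconstruction. The spectra below are synthetic; the VIPERS pipeline additionally works in both observed and rest frames and locates the sky-residual wavelengths automatically.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    wave = np.linspace(0, 1, 200)
    clean = np.array([np.sin(2 * np.pi * (wave + p)) for p in rng.random(300)])

    pca = PCA(n_components=5).fit(clean)               # basis from clean spectra

    corrupt = np.sin(2 * np.pi * (wave + 0.3))
    bad = slice(90, 110)                               # sky-residual window
    corrupt[bad] += rng.normal(0, 2, size=20)          # imprinted residual

    recon = pca.inverse_transform(pca.transform(corrupt[None]))[0]
    repaired = corrupt.copy()
    repaired[bad] = recon[bad]                         # patch only the bad pixels
    ```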

  17. Treatment of “Bacterial Cystitis” in Fully Automatic Mechanical Models Simulating Conditions of Bacterial Growth in the Urinary Bladder

    PubMed Central

    O'Grady, F.; Mackintosh, I. P.; Greenwood, D.; Watson, B. W.

    1973-01-01

    Two fully automatic models are described in which growing cultures can be continuously diluted and periodically discharged, producing conditions of growth resembling those of the infected urinary bladder. Both models generate a continuous record of the opacity of the growing culture, and the second model also generates a record of the Eh. The effect of adding ampicillin to a sensitive strain of Escherichia coli growing under these conditions is described, and the relation of the results to human therapy is discussed. PMID:4577943

  18. Automatic Calibration of a Distributed Rainfall-Runo