Science.gov

Sample records for automatic model based

  1. Model-Based Reasoning in Humans Becomes Automatic with Training

    PubMed Central

    Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.

    2015-01-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239
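
    The contrast drawn above between the two algorithmic strategies can be made concrete with a toy sketch: a model-free learner caches action values with a temporal-difference update, while a model-based learner plans by iterating over a learned model of transitions and rewards. The sketch below is illustrative only; the tiny random MDP, the discount factor and the learning rate are assumptions, not the task used in the study.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions (illustrative assumption, not the task used in the study).
n_states, n_actions, gamma = 3, 2, 0.9
T = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # learned transition model
R = np.random.rand(n_states, n_actions)                                  # learned reward model

# Model-free: cache values with a temporal-difference (Q-learning) update.
Q_mf = np.zeros((n_states, n_actions))
def td_update(s, a, r, s_next, alpha=0.1):
    Q_mf[s, a] += alpha * (r + gamma * Q_mf[s_next].max() - Q_mf[s, a])

td_update(0, 1, 0.5, 2)   # one cheap, incremental update after one experienced transition

# Model-based: plan by sweeping value iteration over the learned model (slower, but flexible).
def plan(T, R, n_sweeps=100):
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        V = Q.max(axis=1)
        Q = R + gamma * np.einsum('san,n->sa', T, V)
    return Q

Q_mb = plan(T, R)
```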

  2. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  3. Automatic code generation from the OMT-based dynamic model

    SciTech Connect

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.

  4. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.
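
    As a minimal illustration of the observation model described above (the recording is a non-negative weighted superposition of known library sounds), the sketch below estimates the weights by non-negative least squares. The library, the observed spectrum and all sizes are made-up assumptions, and the Bayesian priors discussed in the paper are not modelled here.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bins, n_sounds = 512, 40               # assumed spectrum size and library size
library = rng.random((n_bins, n_sounds)) # columns: magnitude spectra of known sounds
true_w = np.zeros(n_sounds)
true_w[[3, 17, 25]] = [1.0, 0.5, 0.8]    # only a few library sounds are active
observation = library @ true_w + 0.01 * rng.random(n_bins)

# Estimate non-negative weights of the superposition (least squares with a positivity constraint).
w_hat, residual = nnls(library, observation)
print(np.flatnonzero(w_hat > 0.1))       # indices of detected library sounds
```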

  5. Automatic brain segmentation and validation: image-based versus atlas-based deformable models

    NASA Astrophysics Data System (ADS)

    Aboutanos, Georges B.; Dawant, Benoit M.

    1997-04-01

    Due to the complexity of the brain surface, there is at present no segmentation method that proves to work automatically and consistently on any 3-D magnetic resonance (MR) images of the head. There is a definite lack of validation studies related to automatic brain extraction. In this work we present an image-based automatic method for brain segmentation and use its results as an input to a deformable model method which we call image-based deformable model. Combining image-based methods with a deformable model can lead to a robust segmentation method without requiring registration of the image volumes into a standardized space, the automation of which remains challenging for pathological cases. We validate our segmentation results on 3-D MP-RAGE (magnetization-prepared rapid gradient-echo) volumes for the image model prior- and post-deformation and compare them to an atlas model prior- and post-deformation. Our validation is based on volume measurement comparison to manually segmented data. Our analysis shows that the improvements afforded by the deformable model methods are statistically significant; however, there are no significant differences between the image-based and atlas-based deformable model methods.

  6. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, manually describing models in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing modeling programs have problems; in particular, some are not accurate or support only specific CAD formats. To convert CAD models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of the method is the translation between the CAD model, represented by boundary representation (B-REP), and the GDML model, represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that the method converts standard CAD models accurately and can be used for Geant4 automatic modeling.
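
    To make the conversion target concrete, the fragment below writes a small GDML solids section containing two boxes combined by a Boolean union, using only the Python standard library. The element and attribute names follow the GDML schema, but the solid names and dimensions are illustrative assumptions rather than MCAM output.

```python
import xml.etree.ElementTree as ET

# Build a minimal GDML <solids> block: two boxes joined by a Boolean union
# (illustrative names and sizes; a real conversion would emit many such solids).
solids = ET.Element('solids')
for name in ('boxA', 'boxB'):
    ET.SubElement(solids, 'box', name=name, x='10', y='10', z='10', lunit='mm')

union = ET.SubElement(solids, 'union', name='boxA_plus_boxB')
ET.SubElement(union, 'first', ref='boxA')
ET.SubElement(union, 'second', ref='boxB')
ET.SubElement(union, 'position', name='offset', x='5', y='0', z='0', unit='mm')

print(ET.tostring(solids, encoding='unicode'))
```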

  7. Automatic shape model building based on principal geodesic analysis bootstrapping.

    PubMed

    Dam, Erik B; Fletcher, P Thomas; Pizer, Stephen M

    2008-04-01

    We present a novel method for automatic shape model building from a collection of training shapes. The result is a shape model consisting of the mean model and the major modes of variation with a dense correspondence map between individual shapes. The framework consists of iterations where a medial shape representation is deformed into the training shapes followed by computation of the shape mean and modes of shape variation. In the first iteration, a generic shape model is used as starting point - in the following iterations in the bootstrap method, the resulting mean and modes from the previous iteration are used. Thereby, we gradually capture the shape variation in the training collection better and better. Convergence of the method is explicitly enforced. The method is evaluated on collections of artificial training shapes where the expected shape mean and modes of variation are known by design. Furthermore, collections of real prostates and cartilage sheets are used in the evaluation. The evaluation shows that the method is able to capture the training shapes close to the attainable accuracy already in the first iteration. Furthermore, the correspondence properties measured by generality, specificity, and compactness are improved during the shape model building iterations.

  8. Evaluation of Model Recognition for Grammar-Based Automatic 3D Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and the steadily growing capability to deliver both quality and quantity is further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models that are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction; the validity of the measures themselves is also assessed from the evaluation point of view.

  9. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    NASA Astrophysics Data System (ADS)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
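
    The matching step of the pipeline described above (comparing sequences of estimated LDM parameters between gait samples) can be sketched with a plain dynamic time warping distance, as below; the two sequences are random stand-ins for 22-parameter LDM trajectories.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of parameter vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two gait sequences, each frame described by 22 LDM parameters (sizes assumed).
gait_a = np.random.rand(60, 22)
gait_b = np.random.rand(75, 22)
print(dtw_distance(gait_a, gait_b))
```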

  10. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  11. Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Peng, Zhigang; Liao, Shu; Shinagawa, Yoshihisa; Zhan, Yiqiang; Hermosillo, Gerardo; Zhou, Xiang Sean

    2014-03-01

    Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods. The use of low-level information derived from the image-of-interest alone is insufficient for detecting bones and distinguishing boundaries of different bones that are in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach is to perform a hierarchical articulated shape deformation that is driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and we have a segmentation success rate of ~89.70 %. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
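
    One iteration of the shape-fitting idea described here (each model point moves along its normal to the candidate position whose local intensity profile best matches the learned appearance model) can be reduced to the following sketch. The appearance model is simplified to a single mean profile with a Euclidean match, and all arrays are synthetic assumptions.

```python
import numpy as np

def best_shift_along_normal(sampled_profiles, mean_profile):
    """sampled_profiles: (n_candidates, profile_len) intensity profiles sampled at
    candidate positions along the point's normal; return index of the best match."""
    errors = np.linalg.norm(sampled_profiles - mean_profile, axis=1)
    return int(np.argmin(errors))

# Synthetic example: 11 candidate positions along the normal, profiles of length 7.
rng = np.random.default_rng(1)
mean_profile = rng.random(7)                          # learned appearance model (assumed)
candidates = rng.random((11, 7))
candidates[4] = mean_profile + 0.01 * rng.random(7)   # plant a good match near the centre

shift = best_shift_along_normal(candidates, mean_profile) - 5  # signed displacement
print('move point by', shift, 'samples along its normal')
```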

  12. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    NASA Astrophysics Data System (ADS)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    This article addresses the problem of machining accuracy for the basing holes of automatically replaceable cubical units (carriers) in reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units on the basis of dimensional chain analysis are presented. The influence of the machining parameters on the accuracy of the centre-to-centre spacing between basing holes is shown. A mathematical model of the machining accuracy of the carriers' basing holes is proposed.

  13. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammar and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the different segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  14. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
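
    The core reduction step described above (singular value decomposition of a local sensitivity matrix to count the dominant, locally active dynamical modes) is easy to sketch; the sensitivity matrix below is synthetic and the error tolerance is an assumed value.

```python
import numpy as np

def n_active_modes(sensitivity_matrix, rel_tol=1e-3):
    """Number of dominant dynamical modes: singular values needed to keep the
    relative approximation error of the sensitivity matrix below rel_tol."""
    s = np.linalg.svd(sensitivity_matrix, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, 1.0 - rel_tol**2) + 1)

# Synthetic sensitivity matrix with rapidly decaying singular values.
S = np.random.rand(20, 20) @ np.diag(np.logspace(0, -8, 20)) @ np.random.rand(20, 20)
print('locally active modes:', n_active_modes(S))
```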

  15. Model-based vision system for automatic recognition of structures in dental radiographs

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consists of a neural-network like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  16. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    The automatic weapon platform is an important research direction both domestically and overseas; it must quickly search for the object to be engaged against a complex background. Fast detection of a given target is therefore the foundation of any further task. Since chest-shaped targets are common in shooting practice, this paper takes the chest-shaped target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM). In this model, the target image is divided into several parts, yielding a root filter and part filters; finally, the algorithm detects the target over a HOG feature pyramid using a sliding window. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that the algorithm can detect the chest-shaped target in natural environments, indoors or outdoors. The true positive rate reaches 76% on a set containing many hard samples, while the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with a C++ implementation, the detection time for 640 × 480 images is 2.093 s. Given the TI runtime library for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on a hardware platform and has application prospects in real systems.
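
    The final detection stage described here (scoring a trained filter over HOG features with a sliding window) can be sketched as follows. A single linear root-filter score stands in for the full DPM with its part filters, and the image, window size and weights are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog

def sliding_window_scores(image, weights, bias, win=(64, 64), step=16):
    """Score a linear (root) filter on HOG features over a sliding window."""
    detections = []
    h, w = image.shape
    for y in range(0, h - win[0] + 1, step):
        for x in range(0, w - win[1] + 1, step):
            patch = image[y:y + win[0], x:x + win[1]]
            feat = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            score = float(weights @ feat + bias)
            if score > 0:
                detections.append((y, x, score))
    return detections

image = np.random.rand(480, 640)                 # stand-in for a grey-level frame
feat_len = hog(np.zeros((64, 64)), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2)).size
weights, bias = np.random.randn(feat_len), -1.0  # stand-in for an SVM-trained filter
print(len(sliding_window_scores(image, weights, bias)))
```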

  17. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, already used in screening practice, offers a potentially powerful solution. In this paper, a fast and robust automatic detection of exudates based on moving average histogram models of the fuzzy image was applied, and an improved histogram was derived. After segmentation of the exudate candidates, the true exudates were selected using a Sobel edge detector and the automatic Otsu thresholding algorithm, resulting in accurate localization of the exudates in digital retinal images. To compare the performance of exudate detection methods we constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves an overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively.
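
    The pruning stage mentioned above (verifying exudate candidates with a Sobel edge map and automatic Otsu thresholding) might look like the following sketch; the candidate mask and the green-channel image are synthetic placeholders rather than retinal data.

```python
import numpy as np
from skimage.filters import sobel, threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(2)
green = rng.random((256, 256))                       # stand-in for the retinal green channel
candidates = label(rng.random((256, 256)) > 0.995)   # stand-in for exudate candidate blobs

edges = sobel(green)
bright = green > threshold_otsu(green)               # automatic Otsu threshold

kept = []
for region in regionprops(candidates):
    mask = candidates == region.label
    # keep candidates that are bright and sit on a visible edge response
    if bright[mask].mean() > 0.5 and edges[mask].mean() > edges.mean():
        kept.append(region.label)
print('exudate candidates kept:', len(kept))
```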

  18. Contour-based automatic crater recognition using digital elevation models from Chang'E missions

    NASA Astrophysics Data System (ADS)

    Zuo, Wei; Zhang, Zhoubin; Li, Chunlai; Wang, Rongwu; Yu, Linjie; Geng, Liang

    2016-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). Through DEM and image processing, this method can be used to reconstruct contour surfaces, extract and combine contour lines, set the characteristic parameters of crater morphology, and establish a crater pattern recognition program. The method has been tested and verified with DEM data from Chang'E-1 (CE-1) and Chang'E-2 (CE-2), showing a strong crater recognition ability with high detection rate, high robustness, and good adaptation to recognize various craters with different diameter and morphology. The method has been used to identify craters with high precision and accuracy on the Moon. The results meet requirements for supporting exploration and related scientific research for the Moon and planets.
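
    The contour-based recognition idea (extract closed elevation contours from the DEM and accept those whose shape parameters match a crater) can be sketched as below; the DEM is a synthetic bowl and the circularity threshold is an assumed characteristic parameter.

```python
import numpy as np
from skimage import measure

# Synthetic DEM with one bowl-shaped depression standing in for a crater.
y, x = np.mgrid[0:200, 0:200]
dem = -50 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / (2 * 30 ** 2))

craters = []
for level in np.linspace(dem.min() * 0.8, dem.min() * 0.2, 5):
    for contour in measure.find_contours(dem, level):
        if not np.allclose(contour[0], contour[-1]):
            continue                                  # keep closed contour lines only
        d = np.diff(contour, axis=0)
        perimeter = np.sum(np.hypot(d[:, 0], d[:, 1]))
        area = 0.5 * abs(np.sum(contour[:-1, 0] * contour[1:, 1]
                                - contour[1:, 0] * contour[:-1, 1]))
        circularity = 4 * np.pi * area / perimeter ** 2
        if circularity > 0.85:                        # characteristic parameter (assumed)
            craters.append((level, area))
print(len(craters), 'closed crater-like contours found')
```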

  19. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.

  20. A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Hovland, P. D.

    2001-05-01

    The automatic differentiation (AD) technique was used to illustrate a new approach to a parameter tuning scheme for an uncoupled sea-ice model. The 1992 atmospheric forcing field obtained from NCEP data was used as the forcing variables in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic Ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function defined by the norm of the difference between observed and simulated ice drift locations was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The results of the study show that more realistic simulations of the ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm, a quasi-Newton method, was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computation of derivative information.
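
    The tuning loop described in this record (minimise a drift-trajectory misfit with L-BFGS-B, with gradients supplied by automatic differentiation of the model code) can be outlined as follows. The quadratic cost below merely stands in for the sea-ice model, and the parameter names, initial values and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

observed = np.array([2.4e-3, 5.5e-3, 2.75e4, 25.0, 1.2e-3])  # stand-in "true" parameters

def cost_and_grad(p):
    """Stand-in for the trajectory misfit; in the study both the cost and its
    gradient would come from the AD-processed Fortran sea-ice model."""
    r = p - observed
    return float(r @ r), 2.0 * r

x0 = np.array([1.0e-3, 1.0e-3, 1.0e4, 10.0, 1.0e-3])   # initial guesses (assumed)
bounds = [(1e-4, 1e-2), (1e-4, 1e-2), (1e3, 1e5), (0.0, 45.0), (1e-4, 1e-2)]

result = minimize(cost_and_grad, x0, jac=True, method='L-BFGS-B', bounds=bounds)
print(result.x)   # tuned drag coefficients, ice strength, turning angle, heat transfer
```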

  1. Modelling Pasture-based Automatic Milking System Herds: Grazeable Forage Options

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    One of the challenges to increasing milk production in a large pasture-based herd with an automatic milking system (AMS) is to grow forages within a 1-km radius, as increases in walking distance increase the milking interval and reduce yield. The main objective of this study was to explore sustainable forage option technologies that can supply a high amount of grazeable forage for AMS herds, using the Agricultural Production Systems Simulator (APSIM) model. Three different basic simulation scenarios (with irrigation) were carried out using forage crops (namely maize, soybean and sorghum) for the spring-summer period. Subsequent crops in the three scenarios were forage rape over-sown with ryegrass. Each individual simulation was run using actual climatic records for the period from 1900 to 2010. The highest simulated forage yields in the maize-, soybean- and sorghum-based rotations (each followed by forage rape-ryegrass) were 28.2, 22.9, and 19.3 t dry matter/ha, respectively. The simulations suggested that the irrigation requirement could increase by up to 18%, 16%, and 17%, respectively, in those rotations in El Niño years compared to neutral years. On the other hand, the irrigation requirement could increase by up to 25%, 23%, and 32% in the maize-, soybean- and sorghum-based rotations in El Niño years compared to La Niña years. However, the irrigation requirement could decrease by up to 8%, 7%, and 13% in the maize-, soybean- and sorghum-based rotations in La Niña years compared to neutral years. The major implication of this study is that APSIM modelling has potential for devising preferred forage options to maximise grazeable forage yield, which may create the opportunity to grow more forage in the small area around the AMS and in turn minimise walking distance and milking interval and thus increase milk production. Our analyses also suggest that simulation analysis may provide decision support during climatic uncertainty. PMID:25924963

  2. Automatic language identification based on Gaussian mixture model and universal background model

    NASA Astrophysics Data System (ADS)

    Qu, Dan; Wang, Bingxi; Wei, Xin

    2003-09-01

    Compared with other speech technologies in speech processing, automatic language identification is a relatively new yet difficult problem. In this paper, a language identification algorithm is presented and experiments are conducted using the OGI multi-language telephone speech corpus (OGI-TS). The experimental results are then described. It is shown that GMM-UBM is an efficient approach to the language identification problem.
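
    A minimal version of the GMM-UBM idea (score a test utterance by the log-likelihood of a language-specific GMM relative to a universal background model) is sketched below with scikit-learn. The acoustic features are random stand-ins, and the per-language models are trained directly rather than MAP-adapted from the UBM as in a full system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Stand-in cepstral feature frames (n_frames x n_dims) for two training languages.
train = {'en': rng.normal(0.0, 1.0, (2000, 13)), 'zh': rng.normal(0.5, 1.2, (2000, 13))}

ubm = GaussianMixture(n_components=16, covariance_type='diag', random_state=0)
ubm.fit(np.vstack(list(train.values())))               # universal background model

models = {lang: GaussianMixture(n_components=16, covariance_type='diag',
                                random_state=0).fit(feats)
          for lang, feats in train.items()}

test = rng.normal(0.5, 1.2, (300, 13))                 # unknown utterance
scores = {lang: m.score(test) - ubm.score(test) for lang, m in models.items()}
print(max(scores, key=scores.get))                     # language with the best LLR score
```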

  3. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because, if the variogram model parameters are tainted with uncertainty, that uncertainty will propagate into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases an automatic fitting method, combining geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. In this paper, effort has been made to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a MATLAB program has been developed that can produce optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, the cross-validation method is used, and the best model is presented to the user as the output. To check the capability of the proposed objective function and procedure, three case studies are presented.
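
    The automatic-fitting idea (minimise a weighted least-squares misfit between the experimental variogram and a theoretical model by simulated annealing) can be sketched as below. SciPy's dual_annealing is used in place of the authors' MATLAB program, only a single spherical structure is fitted, and the experimental points, weights and bounds are assumed.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Experimental variogram: lag distances, values, and pair counts (assumed numbers).
lags   = np.array([10., 20., 30., 40., 60., 80., 100.])
gammas = np.array([0.2, 0.45, 0.7, 0.85, 1.0, 1.05, 1.02])
npairs = np.array([300, 280, 260, 220, 180, 140, 100])

def spherical(h, nugget, sill, a):
    g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, g, nugget + sill)

def wls(params):
    """Weighted least squares objective, weights proportional to pair counts."""
    model = spherical(lags, *params)
    return float(np.sum(npairs * (gammas - model) ** 2))

bounds = [(0.0, 0.5), (0.1, 2.0), (10.0, 200.0)]      # nugget, partial sill, range
fit = dual_annealing(wls, bounds, seed=0)
print(dict(zip(['nugget', 'sill', 'range'], fit.x)))
```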

  4. Automatic detection of echolocation clicks based on a Gabor model of their waveform.

    PubMed

    Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine

    2015-06-01

    Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on the inferences, a detection algorithm involving the post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real (taken from MobySound database) recordings. The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real-time. Performance comparisons are made to other publicly available detectors including pamguard.
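
    The detector described above applies the Teager-Kaiser Energy Operator and then compares a Gaussian with a rectangular moving average of its output. A compact sketch of that chain is given below, with a synthetic Gabor-like click embedded in noise; the sampling rate, window lengths and threshold are assumptions.

```python
import numpy as np
from scipy.signal import windows, fftconvolve

fs = 192_000
t = np.arange(-0.5e-3, 0.5e-3, 1 / fs)
click = np.exp(-(t / 1e-4) ** 2) * np.cos(2 * np.pi * 60_000 * t)   # Gabor-like click
x = 0.05 * np.random.randn(fs // 10)
x[9000:9000 + click.size] += click

# Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
tkeo = np.empty_like(x)
tkeo[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
tkeo[[0, -1]] = 0.0

# Ratio of a Gaussian moving average to a rectangular moving average of the TKEO output.
n = 101
gauss = windows.gaussian(n, std=12); gauss /= gauss.sum()
rect = np.ones(n) / n
ratio = fftconvolve(tkeo, gauss, mode='same') / (fftconvolve(tkeo, rect, mode='same') + 1e-12)
print('detection indices:', np.flatnonzero(ratio > 1.5)[:5])
```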

  5. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially ones that exist in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  6. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model

    SciTech Connect

    He, Baochun; Huang, Cheng; Zhou, Shoujun; Hu, Qingmao; Jia, Fucang; Sharp, Gregory; Fang, Chihua; Fan, Yingfang

    2016-05-15

    Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods—3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration—are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. Conclusions: The proposed automatic approach

  7. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.

    PubMed

    He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang

    2016-05-01

    A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods-3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration-are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver
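
    The second-level idea in this method (an AdaBoost profile classifier deciding, for candidate positions along each landmark's search line, whether the local intensity profile looks like the true boundary) can be reduced to the sketch below. The profiles are synthetic, and a scikit-learn AdaBoostClassifier stands in for the authors' trained classifiers.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(4)

# Training data: intensity profiles sampled across the organ boundary (label 1)
# and away from it (label 0). Shapes and values are synthetic assumptions.
boundary = np.hstack([rng.normal(0.2, 0.05, (500, 5)), rng.normal(0.8, 0.05, (500, 5))])
background = rng.normal(0.5, 0.2, (500, 10))
X = np.vstack([boundary, background])
y = np.hstack([np.ones(500), np.zeros(500)])

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# ASM fitting step: move a landmark to the candidate position along its search
# profile whose local appearance gets the highest boundary probability.
candidates = rng.normal(0.5, 0.2, (21, 10))
candidates[13] = np.r_[np.full(5, 0.2), np.full(5, 0.8)]   # plant the true boundary
probs = clf.predict_proba(candidates)[:, 1]
print('best candidate position:', int(np.argmax(probs)))
```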

  8. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  9. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
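
    The quadratic optimisation at the heart of iVFM (a Doppler data-fidelity term plus regularisation, minimised in closed form through a sparse linear system) can be illustrated with the small example below. The operators here are a generic 1D identity and second-difference smoother rather than the paper's 2D mass-conservation and boundary terms, and the regularisation weight is an assumed value.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 200
rng = np.random.default_rng(5)
v_true = np.sin(np.linspace(0, 3 * np.pi, n))          # stand-in velocity field

# Data-fidelity operator: the measurement is a noisy observation of the field.
A = sp.eye(n, format='csr')
b = v_true + 0.2 * rng.standard_normal(n)

# Regulariser: second-difference (smoothness) operator.
L = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
lam = 5.0                                              # regularisation weight (assumed)

# Normal equations of the l2 cost  ||A v - b||^2 + lam * ||L v||^2
v_hat = spsolve((A.T @ A + lam * L.T @ L).tocsc(), A.T @ b)
print(float(np.linalg.norm(v_hat - v_true) / np.linalg.norm(v_true)))
```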

  10. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler define the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS) system; (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  11. A physics-based defects model and inspection algorithm for automatic visual inspection

    NASA Astrophysics Data System (ADS)

    Xie, Yu; Ye, Yutang; Zhang, Jing; Liu, Li; Liu, Lin

    2014-01-01

    The representation of physical characteristics is the most essential feature of mathematical models used for the detection of defects in automatic inspection systems. However, the characteristics of defects and the formation of the defect image are not sufficiently considered in traditional algorithms. This paper presents a mathematical model for defect inspection, denoted the localized defect image model (LDIM), which differs in that it models the features of manual inspection, using a local defect merit function to quantify the cost that a pixel is defective. This function comprises two components: color deviation and color fluctuation. Parameters related to statistics of the background region of the images are also taken into consideration. Test results demonstrate that the model matches the definition of defects given by the international industrial standards IPC-A-610D and IPC-A-600G. Furthermore, the proposed approach enhances small defects to improve detection rates. Evaluation using a defect image database returned a 100% defect inspection rate with 0% false detections, showing that the method could be practically applied in manufacturing to quantify inspection standards and minimize false alarms resulting from human error.

  12. Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil

    2016-04-01

    In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03+/-0.6 mm). The AV plane was detected with an accuracy of -0.6+/-1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean +/-1.96 SD): 0.4+/-5.3 ml, 2.1+/-12.6 ml, and 1.5+/-7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.

  13. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
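
    The decomposition step described here (parameterise the silhouette contour with a B-spline and look for curvature extrema that separate body parts) can be sketched with SciPy's spline routines; the contour below is a synthetic ellipse rather than a real silhouette.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed silhouette contour (an ellipse) in place of a real body outline.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.vstack([120 * np.cos(t), 60 * np.sin(t)])

# B-spline parameterization of the contour.
tck, u = splprep(contour, s=1.0, per=True)
uu = np.linspace(0, 1, 1000)
dx, dy = splev(uu, tck, der=1)
ddx, ddy = splev(uu, tck, der=2)

# Signed curvature of the parameterized contour; extrema suggest part boundaries.
kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
print('strongest curvature at u =', uu[np.argsort(np.abs(kappa))[-4:]])
```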

  14. Automatic sex determination of skulls based on a statistical shape model.

    PubMed

    Luo, Li; Wang, Mengyang; Tian, Yun; Duan, Fuqing; Wu, Zhongke; Zhou, Mingquan; Rozenholc, Yves

    2013-01-01

    Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments are through subjective visual analysis by anthropologists or metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed, which projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods. It is easy to use without professional qualification and tedious manual measurement. With a group of Chinese skulls including 127 males and 81 females, we choose 92 males and 58 females to establish the discriminant model and validate the model with the other skulls. The correct rate is 95.7% and 91.4% for females and males, respectively. Leave-one-out test also shows that the method has a high accuracy.
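
    The two-stage classification described in this record (project high-dimensional skull data into a low-dimensional shape space, then separate the sexes with Fisher discriminant analysis) corresponds roughly to PCA followed by LDA; the sketch below uses random landmark data of an assumed size in place of digitised skulls.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(6)
n_skulls, n_coords = 150, 3 * 500          # 500 corresponding 3D vertices per skull (assumed)
sex = rng.integers(0, 2, n_skulls)
skulls = rng.normal(0, 1, (n_skulls, n_coords)) + 0.3 * sex[:, None]  # synthetic dimorphism

# Statistical shape space (PCA) followed by Fisher discriminant analysis.
model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(model, skulls, sex, cv=LeaveOneOut())
print('leave-one-out accuracy:', scores.mean())
```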

  15. A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor.

    PubMed

    Madrigal, Carlos A; Branch, John W; Restrepo, Alejandro; Mery, Domingo

    2017-10-02

    Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired from earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and by a support vector machine, the defects are recognized. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud in primitives, reporting an accuracy of 95%, which is higher than for other state-of-art descriptors. The rate of recognition of defects was close to 94%.

  16. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to evaluate the method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  17. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    PubMed

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed.

  18. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  19. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  20. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    PubMed Central

    Ebied, Hala Mousher; Hussein, Ashraf Saad; Tolba, Mohamed Fahmy

    2014-01-01

    This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation, quality, and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used. PMID:25254226

  1. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas have become widely applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts and similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes: 3D clustering and recognition of trunks, voxel growing and part-based 3D modeling. After preprocessing, the trunk center is identified as the point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same centers as their nearest point with higher density. To eliminate noisy points, the cluster border is refined by trimming boundary outliers. Then, candidate trunks are extracted based on the clustering results in three orthogonal planes by shape analysis. Voxel growing obtains the completed pole-like objects regardless of overlaying. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are utilized to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaying. Experimental results show that the proposed method can extract the exact attributes and model roadside pole-like objects efficiently.
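
    The trunk-center selection described above resembles a density-peak clustering scheme (after Rodriguez and Laio): points with a local density peak and a large distance to any denser point become trunk centres, and the remaining points inherit the label of their nearest denser neighbour. The sketch below is a minimal, hedged illustration of that idea; the cutoff radius, number of centres and the toy 2D points are assumptions for illustration, not the paper's parameters.

```python
# Minimal density-peak clustering sketch (illustrative, not the paper's implementation).
import numpy as np

def density_peak_clusters(points, cutoff=0.5, n_centers=3):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (d < cutoff).sum(axis=1) - 1              # local density: neighbours within the cutoff
    order = np.argsort(-rho)                        # points sorted by decreasing density
    delta = np.full(len(points), d.max())           # distance to nearest denser point
    parent = np.arange(len(points))
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                       # all points with higher density
        j = higher[np.argmin(d[i, higher])]
        delta[i], parent[i] = d[i, j], j
    centers = np.argsort(-(rho * delta))[:n_centers]    # density peaks = candidate trunk centres
    labels = np.full(len(points), -1)
    labels[centers] = np.arange(n_centers)
    for i in order:                                 # assign each point to its denser neighbour's cluster
        if labels[i] < 0:
            labels[i] = labels[parent[i]]
    return centers, labels

xy = np.random.rand(200, 2) * 10                    # toy stand-in for projected trunk points
print(density_peak_clusters(xy)[0])
```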

  2. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-03-25

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes in the past, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on the classification in the sparse representation framework which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized for the propagation of the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch openings and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. An active contour-based atlas registration model applied to automatic subthalamic nucleus targeting on MRI: method and validation.

    PubMed

    Duay, Valérie; Bresson, Xavier; Castro, Javier Sanchez; Pollo, Claudio; Cuadra, Meritxell Bach; Thiran, Jean-Philippe

    2008-01-01

    This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and the active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve the results of state-of-the-art targeting methods and at the same time to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.

  4. Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation.

    PubMed

    Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi

    2017-05-01

    One major limiting factor that prevents the accurate delineation of human organs has been the presence of severe pathology and pathology affecting organ borders. Overcoming these limitations is exactly the concern of this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to the finer local details in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 database for liver segmentation competition, and ranks 3rd among all published state-of-the-art automatic methods. Our method is also evaluated on some pathological organs (pathological liver and right lung) from 95 clinical CT scans and its results are compared with three closely related methods. The applicability of the proposed method to segmentation of the various pathological organs (including some highly severe cases) is demonstrated with good results on both quantitative and qualitative experimentation; our segmentation algorithm can delineate organ boundaries that reach a level of accuracy comparable with those of human raters. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628

  6. An automatic image-based modelling method applied to forensic infography.

    PubMed

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  7. A statistically based seasonal precipitation forecast model with automatic predictor selection and its application to central and south Asia

    NASA Astrophysics Data System (ADS)

    Gerlitz, Lars; Vorogushyn, Sergiy; Apel, Heiko; Gafurov, Abror; Unger-Shayesteh, Katy; Merz, Bruno

    2016-11-01

    The study presents a statistically based seasonal precipitation forecast model, which automatically identifies suitable predictors from globally gridded sea surface temperature (SST) and climate variables by means of an extensive data-mining procedure and explicitly avoids the utilization of typical large-scale climate indices. This leads to an enhanced flexibility of the model and enables its automatic calibration for any target area without any prior assumption concerning adequate predictor variables. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is constructed, by means of the preliminary generated predictor variables. Monthly predictions are aggregated to running 3-month periods in order to generate a seasonal precipitation forecast. The model is applied and evaluated for selected target regions in central and south Asia. Particularly for winter and spring in westerly-dominated central Asia, correlation coefficients between forecasted and observed precipitation reach values up to 0.48, although the variability of precipitation rates is strongly underestimated. Likewise, for the monsoonal precipitation amounts in the south Asian target area, correlations of up to 0.5 were detected. The skill of the model for the dry winter season over south Asia is found to be low. A sensitivity analysis with well-known climate indices, such as the El Niño- Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern, reveals the major large-scale controlling mechanisms of the seasonal precipitation climate for each target area. For the central Asian target areas, both
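
    As a rough illustration of the final modelling step described above, the sketch below fits a random-forest regression from a few cluster-aggregated predictor series to a monthly precipitation anomaly and scores it by correlation. The synthetic data and hyperparameters are placeholders; the correlation screening and variability-based clustering stages of the actual model are omitted.

```python
# Random-forest seasonal forecast sketch on synthetic predictors (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_years, n_predictor_regions = 35, 6
X = rng.normal(size=(n_years, n_predictor_regions))        # e.g. lagged SST cluster means
y = 0.8 * X[:, 0] + rng.normal(scale=0.6, size=n_years)    # monthly precipitation anomaly

train, test = slice(0, 25), slice(25, None)                 # simple hold-out split by year
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[train], y[train])
forecast = model.predict(X[test])
r, _ = pearsonr(forecast, y[test])
print(f"forecast skill (correlation): {r:.2f}")
```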

  8. An automatic layout system for OMT-based object diagram

    SciTech Connect

    Nakashima, Satoshi; Ali, Jauhar; Tanaka, Jiro

    1996-12-31

    In this paper, we propose an automatic layout method for the object diagram, the event trace diagram and the state diagram based on OMT (Object Modeling Technique) methodology. In our automatic layout system, when the elements of model (classes, associations etc.) are entered, an arrangement for them is computed, and the object model automatically appears in the editor`s window. We adopted Messinger`s algorithm using the rule of divide-and-conquer for the layout algorithm of the object diagram. Furthermore, diagrams can be maintained easily with the capabilities of automatic modification and direct manipulation interface.

  9. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    NASA Astrophysics Data System (ADS)

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.

    2015-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we mapped the 16-bit DEM to 256 gray scales for data compression; then, for better visualization, the grayscale is converted into an RGB image. After that, a median filter is applied twice to the DEM for data optimization, producing smooth, continuous outlines for subsequent construction of the contour plane. Considering the fact that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of the contour plane with the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise searching method. Then, a library of training samples is constructed based on the above targets calculated from some sample DEM data, from which real crater targets are labeled as positive samples manually, and non-crater objects are labeled as negative ones. Some morphological features are calculated for all these samples, namely the major axis (L), circumference (C), area inside the boundary (S), and radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C, S/L as the key factors for identifying craters, and apply the Fisher discrimination method on the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data for identifying lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness, and is applicable for the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.
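
    A hedged sketch of the classification step follows: the six ratio features named above (R/L, R/S, C/L, C/S, R/C, S/L) are fed to a Fisher linear discriminant. The measurement table below is invented for illustration and does not come from the CE-1/CE-2 data.

```python
# Fisher discriminant on crater ratio features (toy measurements, illustrative only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ratio_features(L, C, S, R):
    """Build the six ratio features from major axis L, circumference C,
    enclosed area S and largest inscribed-circle radius R."""
    return np.column_stack([R / L, R / S, C / L, C / S, R / C, S / L])

# toy training samples: (L, C, S, R) per candidate contour; 1 = crater, 0 = non-crater
meas = np.array([[10, 30, 70, 4.5], [12, 36, 100, 5.5], [20, 80, 90, 2.0], [18, 70, 60, 1.5]], float)
labels = np.array([1, 1, 0, 0])

lda = LinearDiscriminantAnalysis()
lda.fit(ratio_features(*meas.T), labels)
candidate = np.array([[11, 33, 85, 5.0]], float)
print("crater" if lda.predict(ratio_features(*candidate.T))[0] == 1 else "non-crater")
```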

  10. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping

    2016-09-01

    An adaptive edge detection and mapping (AEDM) algorithm to address the challenging one-dimensional barcode recognition task in the presence of both image degradation and barcode shape deformation is presented. AEDM is an edge detection-based method that has three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase involves detecting the edge points in a scan line. The edge positions are assumed to be the intersecting points between a scan line and a corresponding well-designed reference line. The third phase involves adjusting the preliminary edge positions to more reasonable positions by employing prior information of the coding rules. Thus, a universal edge mapping model is established to obtain the coding positions of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were conducted to evaluate the performance of the AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and situations that involve the combinations of these issues.
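
    The following sketch illustrates the kind of nonlinear mapping-model fit described above, solved with Levenberg-Marquardt via SciPy: observed edge positions along a scan line are related to ideal, equally spaced module coordinates. The quadratic distortion model and the synthetic measurements are assumptions for illustration only, not the paper's actual mapping model.

```python
# Levenberg-Marquardt fit of a toy edge-mapping model (illustrative assumption).
import numpy as np
from scipy.optimize import least_squares

ideal = np.arange(10, dtype=float)                           # ideal module coordinates
true_params = (0.004, 1.9, 12.0)                             # hidden distortion coefficients
observed = true_params[0] * ideal**2 + true_params[1] * ideal + true_params[2]
observed += np.random.default_rng(0).normal(scale=0.05, size=ideal.size)  # measurement noise

def residuals(p, x_obs, u_ideal):
    a, b, c = p
    return (a * u_ideal**2 + b * u_ideal + c) - x_obs        # model(u) minus observed edge position

fit = least_squares(residuals, x0=[0.0, 1.0, 0.0], args=(observed, ideal), method="lm")
print(fit.x)    # recovered distortion coefficients, close to true_params
```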

  11. Automatic detection of the features [high] and [low] in a landmark-based model of speech perception

    NASA Astrophysics Data System (ADS)

    Slifka, Janet

    2004-05-01

    This research is part of a landmark-based approach to modeling speech perception in which sound segments are assumed to be represented as bundles of binary distinctive features. In this model, probability estimates for feature values are derived from measurements of the acoustics in the vicinity of landmarks. The goal of the current project is to automatically detect the features [high] and [low] for vowel segments based on measurements from average spectra. A long-term and a short-term average spectrum are computed using all vowel regions in the utterance and are used to estimate speaker-specific parameters such as average F0 and average F3 (an indicator of vocal tract length). These parameters are used to estimate F1 using a peak-picking process on the average spectrum at each vowel landmark. Preliminary results are derived from read connected speech for 738 vowels from 80 utterances (two male speakers, two female speakers). Speaker-independent logistic regression analysis using only average F0 and F1 determines the feature [high] with 73% accuracy and the feature [low] with 84% accuracy. Proposals are made for methods to use additional spectral detail to create a more robust estimate for vowels which show significant formant movement. [Work supported by NIH Grant No. DC02978.]
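
    A minimal sketch of the speaker-independent logistic regression described above: average F0 and per-vowel F1 predicting the binary feature [high]. The formant values below are illustrative, not measurements from the study.

```python
# Logistic regression on (average F0, F1) for the feature [high] (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: average F0, F1 at the vowel landmark (Hz); label: 1 = [+high]
X = np.array([[120, 310], [115, 340], [210, 380], [125, 650], [118, 700], [215, 760]], float)
y = np.array([1, 1, 1, 0, 0, 0])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([[130, 360]])[0, 1])   # probability that this vowel is [+high]
```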

  12. Automatic model-based semantic registration of multimodal MRI knee data.

    PubMed

    Xue, Ning; Doellinger, Michael; Fripp, Jurgen; Ho, Charles P; Surowiec, Rachel K; Schwarz, Raphael

    2015-03-01

    To propose a robust and automated model-based semantic registration for the multimodal alignment of the knee bone and cartilage from three-dimensional (3D) MR image data. The movement of the knee joint can be semantically interpreted as a combination of movements of each bone. A semantic registration of the knee joint was implemented by separately reconstructing the rigid movements of the three bones. The proposed method was validated by registering 3D morphological MR datasets of 25 subjects into the corresponding T2 map datasets, and was compared with rigid and elastic methods using two criteria: the spatial overlap of the manually segmented cartilage and the distance between the same landmarks in the reference and target datasets. The mean Dice Similarity Coefficient (DSC) of the overlapped cartilage segmentation was increased to 0.68 ± 0.1 (mean ± SD) and the landmark distance was reduced to 1.3 ± 0.3 mm after the proposed registration method. Both metrics were statistically superior to using rigid (DSC: 0.59 ± 0.12; landmark distance: 2.1 ± 0.4 mm) and elastic (DSC: 0.64 ± 0.11; landmark distance: 1.5 ± 0.5 mm) registrations. The proposed method is an efficient and robust approach for the automated registration between morphological knee datasets and T2 MRI relaxation maps. © 2014 Wiley Periodicals, Inc.

  13. Automatic development of physical-based models of organ systems from magnetic resonance imagery

    NASA Astrophysics Data System (ADS)

    Greenshields, Ian R.; Chun, Junchul; Ramsby, Gale

    1993-09-01

    The essential goal of the work described herein is to provide a biophysical model within which the effects of the alteration of a variety of geometrical or physical variables within the CSF system can be explored. Our ultimate goal is to be able to divorce such models from the constraints of the artificial geometries (e.g., generalized cylinders) so typical of the usual biophysical model, and to this end we have determined that each structure to be modelled be developed from an actual in-vivo example of the structure, determined by extraction from CT or MR imagery. Onto such models we will then overlay a biophysical structure which will permit us to simulate a variety of different conditions and thereby determine (up to model accuracy) how the simulated condition might in fact impact the in vivo structure were it to be faced with a similar set of physical conditions.

  14. Automatic Detection of Student Mental Models Based on Natural Language Student Input during Metacognitive Skill Training

    ERIC Educational Resources Information Center

    Lintean, Mihai; Rus, Vasile; Azevedo, Roger

    2012-01-01

    This article describes the problem of detecting the student mental models, i.e. students' knowledge states, during the self-regulatory activity of prior knowledge activation in MetaTutor, an intelligent tutoring system that teaches students self-regulation skills while learning complex science topics. The article presents several approaches to…

  15. Automatic Detection of Student Mental Models Based on Natural Language Student Input during Metacognitive Skill Training

    ERIC Educational Resources Information Center

    Lintean, Mihai; Rus, Vasile; Azevedo, Roger

    2012-01-01

    This article describes the problem of detecting the student mental models, i.e. students' knowledge states, during the self-regulatory activity of prior knowledge activation in MetaTutor, an intelligent tutoring system that teaches students self-regulation skills while learning complex science topics. The article presents several approaches to…

  16. The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces

    NASA Astrophysics Data System (ADS)

    JianPing, Gu

    The integration of services is transparent, meaning that users no longer face millions of individual Web services, do not need to care about where the required data are stored, and do not need to learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. The experiments prove the effectiveness and efficiency of our method.

  17. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    DTIC Science & Technology

    2015-12-01

    protocols. We identified limitations and implemented a system that could utilize some of these tools to extract the vocabulary and grammar. We collected 3...sniffer or by specifying an existing capture, network flow, or other accepted formats. • Protocol inference modules: The vocabulary and grammar inference...communication flows. • Simulation module: Netzob utilizes the vocabulary and grammar models previously inferred to understand and generate communication

  18. Automatic target recognition with image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-09-01

    In past decades, the solution to the ATR problem has been thought of as a solution to the Pattern Recognition problem. The reasons that the Pattern Recognition problem has never been solved successfully and reliably for real-world images are more serious than a lack of appropriate ideas. Vision is a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. Vision mechanisms cannot be completely understood apart from the informational processes related to knowledge and intelligence. A reliable solution to the ATR problem is possible only within the solution of a more generic Image Understanding Problem. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise computations of 3-D models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations make possible invariant recognition of a real-world object as an exemplar of a class. This allows for creating ATR systems that are reliable in field conditions.

  19. Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation.

    PubMed

    Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran

    2017-03-01

    The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets by an automated software program followed by manual correction if required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy using the MICCAI 2012 challenge framework and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operated characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to previously published method on the 18 datasets from the MICCAI 2012 challenge with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast

  20. A Cut-Based Procedure For Document-Layout Modelling And Automatic Document Analysis

    NASA Astrophysics Data System (ADS)

    Dengel, Andreas R.

    1989-03-01

    With the growing degree of office automation and the decreasing costs of storage devices, it becomes more and more attractive to store optically scanned documents like letters or reports in an electronic form. Therefore the need for a good paper-computer interface becomes increasingly important. This interface must convert paper documents into an electronic representation that not only captures their contents, but also their layout and logical structure. We propose a procedure to describe the layout of a document page by dividing it recursively into nested rectangular areas. A semantic meaning will be assigned to each one by means of logical labels. The procedure is used as a basis for modelling a hierarchical document layout onto the semantic meaning of the parts in the document. We analyse the layout of a document using a best-first search in this tessellation structure. The search is directed by a measure of similarity between the layout pattern in the model and the layout of the actual document. The validity of a hypothesis for the semantic labelling of a layout block can then be verified. It either supports the hypothesis or initiates the generation of a new one. The method has been implemented in Common Lisp on a SUN 3/60 Workstation and has been run on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of the approach.

  1. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.

  2. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

    Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
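
    As a rough illustration of the equivalence testing mentioned above, the sketch below implements a paired Two One-Sided T-test (TOST) on manual versus automatic dosimetric values. The equivalence margin and the sample values are illustrative assumptions, not the study's data.

```python
# Paired TOST equivalence test sketch (toy values, illustrative only).
import numpy as np
from scipy import stats

def paired_tost(a, b, margin):
    """Equivalence test on paired differences: both one-sided p-values must be
    small for the mean difference to lie within +/- margin."""
    diff = np.asarray(a) - np.asarray(b)
    n = diff.size
    se = diff.std(ddof=1) / np.sqrt(n)
    t_low = (diff.mean() + margin) / se      # H0: mean difference <= -margin
    t_high = (diff.mean() - margin) / se     # H0: mean difference >= +margin
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)

manual = np.array([41.2, 38.5, 45.1, 40.3, 43.8])    # e.g. EUD (Gy), manually delineated ribs
auto   = np.array([41.0, 38.9, 44.8, 40.6, 43.5])    # same patients, automatic segmentation
print(paired_tost(manual, auto, margin=1.0))          # small p-value suggests equivalence
```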

  3. A neural network model for the automatic detection and forecast of convective cells based on meteosat second generation data

    NASA Astrophysics Data System (ADS)

    Puca, S.; de Leonibus, L.; Zauli, F.; Rosci, P.; Musmanno, L.

    Mesoscale Convective Systems (MCSs) are often correlated with heavy rainfall, thunderstorms and hail showers, frequently causing significant damage. The most intensive weather activities occur during the maturing stage of the development, which can be found, in the case of a multi-cell storm, in the centre of the convective complex systems. These convective systems may occur in several different unstable air masses: in a cold air mass behind a polar cold front, in the frontal zone of a polar front, and in warm air ahead of a polar warm front. To understand the meteorological situation and apply the best conceptual model, knowledge of the convective cluster is often not enough. In many cases the forecasters need to know the distribution of the convective cells in the cloudy cluster. A model for the automatic detection and forecast of convective cells, running in operational mode at the Italian Air Force Meteorological Service (UGM/CNMCA), is proposed here. The application relies on the Meteosat Second Generation infrared (IR) windows (10.8 μm, 7.3 μm) and the two water vapour (WV) channels (6.2 μm and 7.3 μm), giving as output the detection of the convective cells and their evolution for the next 15 and 30 minutes. The format of the output of the product is the last IR (10.8 μm) image where the detected cells, their development and their tracking are represented. This multispectral method, based on a variable threshold method during the detection phase and a neural network algorithm during the forecast phase, allowed us to define a model able to detect the convective cells present in a convective cluster, plot their distribution and forecast their evolution for the next 15 and 30 minutes with good efficiency. For analysing the performance of the model with the Meteosat Second Generation data, different error functions have been evaluated for various meteorological cloud contexts (i.e. high layer and cirrus clouds). Some methods for

  4. A semi-automatic framework for highway extraction and vehicle detection based on a geometric deformable model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Road extraction and vehicle detection are two of the most important steps of traffic flow analysis from multi-frame aerial photographs. The traditional way of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs. It is tedious and time-consuming work. To improve this process, this research presents a new semi-automatic framework for highway extraction and vehicle detection from aerial photographs. The basis of the new framework is a geometric deformable model. This model refers to the minimization of an objective function that connects the optimization problem with the propagation of regular curves. Utilizing implicit representation of two-dimensional curve, the implementation of this model is capable of dealing with topological changes during curve deformation process and the output is independent of the position of the initial curves. A seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. Manually selected seed points can be automatically propagated throughout a whole highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction and vehicle detection from a large orthophoto mosaic. In this research, vehicles on the extracted highway network were detected with an 83% success rate.

  5. A study of the thermoregulatory characteristics of a liquid-cooled garment with automatic temperature control based on sweat rate: Experimental investigation and biothermal man-model development

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Blackaby, J. R.; Miles, J. B.

    1973-01-01

    Experimental results for three subjects walking on a treadmill at exercise rates of up to 590 watts showed that thermal comfort could be maintained in a liquid cooled garment by using an automatic temperature controller based on sweat rate. The addition of head- and neck-cooling to an Apollo type liquid cooled garment increased its effectiveness and resulted in greater subjective comfort. The biothermal model of man developed in the second portion of the study utilized heat rates and exchange coefficients based on the experimental data, and included the cooling provisions of a liquid-cooled garment with automatic temperature control based on sweat rate. Simulation results were good approximations of the experimental results.

  6. Genetic Programming for Automatic Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, increasing volumes of data and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used in the integration of alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is influenced by prior understanding and data include: choice of the technique for the induction of knowledge from data, identification of alternative structural hypotheses, definition of rules and constraints for meaningful, intelligent combination of model component hypotheses, and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs based on a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow duration curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach

  7. Automatic enrollment for gait-based person re-identification

    NASA Astrophysics Data System (ADS)

    Ortells, Javier; Martín-Félez, Raúl; Mollineda, Ramón A.

    2015-02-01

    Automatic enrollment involves a critical decision-making process within the people re-identification context. However, this process has traditionally been undervalued. This paper studies the problem of automatic person enrollment from a realistic perspective relying on gait analysis. Experiments simulating random flows of people with considerable appearance variations between different observations of a person have been conducted, modeling both short- and long-term scenarios. Promising results based on ROC analysis show that automatically enrolling people by their gait is affordable with high success rates.

  8. Automatic detection of lung nodules in CT datasets based on stable 3D mass-spring models.

    PubMed

    Cascio, D; Magro, R; Fauci, F; Iacomi, M; Raso, G

    2012-11-01

    We propose a computer-aided detection (CAD) system which can detect small-sized (from 3 mm) pulmonary nodules in spiral CT scans. A pulmonary nodule is a small lesion in the lungs, round-shaped (parenchymal nodule) or worm-shaped (juxtapleural nodule). Both kinds of lesions have a radio-density greater than lung parenchyma, thus appearing white on the images. Lung nodules might indicate lung cancer and their early-stage detection arguably improves the patient survival rate. CT is considered to be the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes the full analysis difficult, leading to omission of nodules by the radiologist. We developed an advanced computerized method for the automatic detection of internal and juxtapleural nodules on low-dose and thin-slice lung CT scans. This method consists of an initial selection of a nodule candidate list, the segmentation of each candidate nodule and the classification of the features computed for each segmented nodule candidate. The presented CAD system is aimed at reducing the number of omissions and decreasing the radiologist scan examination time. Our system locates both internal and juxtapleural nodules with the same scheme. For a correct volume segmentation of the lung parenchyma, the system uses a Region Growing (RG) algorithm and an opening process for including the juxtapleural nodules. The segmentation and the extraction of the suspected nodular lesions from CT images by a lung CAD system constitutes a hard task. In order to solve this key problem, we use a new Stable 3D Mass-Spring Model (MSM) combined with a spline curves reconstruction process. Our model represents concurrently the characteristic gray value range, the directed contour information as well as shape knowledge, which leads to a much more robust and efficient segmentation process. For distinguishing the real nodules among nodule candidates, an additional classification step is applied.
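
    The Region Growing step mentioned above can be sketched as follows: starting from a seed inside the lung parenchyma, neighbours are added while their intensity stays close to the running region mean. The 2D toy slice, tolerance and 4-connectivity are simplifying assumptions; the actual CAD system operates on 3D CT volumes.

```python
# Region growing sketch on a toy 2D "CT slice" (illustrative, not the paper's code).
import numpy as np
from collections import deque

def region_grow(img, seed, tol=60):
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_n = float(img[seed]), 1
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
                    region_sum += float(img[ny, nx])
                    region_n += 1
    return mask

slice_hu = np.full((64, 64), 50)            # soft tissue (toy Hounsfield values)
slice_hu[16:48, 16:48] = -800               # air-filled lung region
print(region_grow(slice_hu, seed=(32, 32)).sum())   # ~number of lung pixels (1024)
```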

  9. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    PubMed

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-09-28

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
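
    The power iteration used to obtain the maximal eigenvalue of the self-adjoint, positive-definite scaling operator can be sketched as below. The small dense matrix stands in for the matrix-free operator of the actual reconstruction; it is an illustrative assumption.

```python
# Power iteration for the maximal eigenvalue of an SPD operator (minimal sketch).
import numpy as np

def max_eigenvalue(apply_op, dim, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = apply_op(v)                  # one application of the operator
        v = w / np.linalg.norm(w)
    return v @ apply_op(v)               # Rayleigh quotient at (near) convergence

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])   # toy SPD matrix
print(max_eigenvalue(lambda x: A @ x, dim=3))    # close to np.linalg.eigvalsh(A).max()
```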

  10. Biological models for automatic target detection

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce

    2008-04-01

    Humans are better at detecting targets in literal imagery than any known algorithm. Recent advances in modeling visual processes have resulted from f-MRI brain imaging with humans and the use of more invasive techniques with monkeys. There are four startling new discoveries. 1) The visual cortex does not simply process an incoming image. It constructs a physics based model of the image. 2) Coarse category classification and range-to-target are estimated quickly - possibly through the dorsal pathway of the visual cortex, combining rapid coarse processing of image data with expectations and goals. This data is then fed back to lower levels to resize the target and enhance the recognition process feeding forward through the ventral pathway. 3) Giant photosensitive retinal ganglion cells provide data for maintaining circadian rhythm (time-of-day) and modeling the physics of the light source. 4) Five filter types implemented by the neurons of the primary visual cortex have been determined. A computer model for automatic target detection has been developed based upon these recent discoveries. It uses an artificial neural network architecture with multiple feed-forward and feedback paths. Our implementation's efficiency derives from the observation that any 2-D filter kernel can be approximated by a sum of 2-D box functions. And, a 2-D box function easily decomposes into two 1-D box functions. Further efficiency is obtained by decomposing the largest neural filter into a high pass filter and a more sparsely sampled low pass filter.
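
    The efficiency argument above (any 2-D kernel approximated by a sum of 2-D boxes, each box separable into two 1-D boxes) can be illustrated with a running-sum box filter whose cost is independent of the box size. The image and radius below are arbitrary; approximating a specific kernel by a weighted sum of such boxes is not shown.

```python
# Separable box (mean) filter via cumulative sums; cost independent of box size.
import numpy as np

def box_filter_1d(x, radius, axis):
    """Running mean of width (2*radius+1) along one axis, using a cumulative sum."""
    pad = [(0, 0)] * x.ndim
    pad[axis] = (radius + 1, radius)
    c = np.cumsum(np.pad(x, pad, mode="edge"), axis=axis)
    lo = np.take(c, range(0, x.shape[axis]), axis=axis)
    hi = np.take(c, range(2 * radius + 1, x.shape[axis] + 2 * radius + 1), axis=axis)
    return (hi - lo) / (2 * radius + 1)

def box_filter_2d(img, radius):
    # a 2-D box decomposes into two 1-D boxes applied along each axis
    return box_filter_1d(box_filter_1d(img, radius, axis=0), radius, axis=1)

img = np.random.rand(64, 64)
smoothed = box_filter_2d(img, radius=3)     # equivalent to a 7x7 mean filter
print(smoothed.shape)
```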

  11. An automatic composition model of Chinese folk music

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaomei; Li, Dongyang; Wang, Lei; Shen, Lin; Gao, Yanyuan; Zhu, Yuanyuan

    2017-03-01

    Automatic composition has achieved rich results in recent decades for Western and some other musical traditions. However, the automatic composition of Chinese music has received less attention. After thousands of years of development, Chinese folk music has a wealth of resources. Designing an automatic composition model that learns the characteristics of Chinese folk melodies and imitates the creative process of music is therefore of some significance. According to the melodic features of Chinese folk music, a Chinese folk music composition model based on a Markov model is proposed to analyze Chinese traditional music. Folk songs with typical Chinese national characteristics are selected for analysis. In this paper, an example of automatic composition is given. The experimental results show that this composition model can produce music with the characteristics of Chinese folk music.
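
    A minimal sketch of the Markov-chain composition idea: a first-order transition table estimated from example melodies is sampled to generate a new pitch sequence. The two toy melodies and the solfège notation are placeholders and do not reflect actual Chinese folk material or the paper's model order.

```python
# First-order Markov melody generation from toy training tunes (illustrative only).
import random
from collections import defaultdict

melodies = [["do", "re", "mi", "sol", "mi", "re", "do"],
            ["sol", "la", "sol", "mi", "re", "do", "re"]]

# estimate first-order transition counts from consecutive note pairs
transitions = defaultdict(list)
for tune in melodies:
    for a, b in zip(tune, tune[1:]):
        transitions[a].append(b)

def compose(start="do", length=8, seed=1):
    random.seed(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))   # sample next note given current
    return notes

print(compose())
```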

  12. Hidden Markov models in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.

  13. Conflicts versus analytical redundancy relations: a comparative analysis of the model based diagnosis approach from the artificial intelligence and automatic control perspectives.

    PubMed

    Cordier, Marie-Odile; Dague, Philippe; Lévy, François; Montmain, Jacky; Staroswiecki, Marcel; Travé-Massuyès, Louise

    2004-10-01

    Two distinct and parallel research communities have been working along the lines of the model-based diagnosis approach: the fault detection and isolation (FDI) community and the diagnostic (DX) community that have evolved in the fields of automatic control and artificial intelligence, respectively. This paper clarifies and links the concepts and assumptions that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. A formal framework is proposed in order to compare the two approaches and the theoretical proof of their equivalence together with the necessary and sufficient conditions is provided.

  14. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, the automatic identification of model parameters is identified as a key bottleneck for applying such models in water supply enterprises. A methodology for the automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithm of automatic parameter identification is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, MCS (Monte-Carlo Sampling) is used for automatic identification of parameter values, and the detailed technical route based on RSA and MCS is presented. A module for the automatic identification of water pipe network model parameters is developed. Finally, a typical water pipe network is selected as a case, the case study on automatic parameter identification is conducted, and satisfactory results are achieved.
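
    As an illustration of the MCS step described above, the sketch below samples a candidate parameter uniformly, runs a placeholder hydraulic model, and keeps the sample that best matches observed nodal pressures. The toy model and values are assumptions; a real implementation would drive a network solver (e.g. an EPANET-style engine) and identify several sensitive parameters jointly.

```python
# Monte-Carlo sampling for parameter identification against observed pressures (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([31.5, 28.2, 26.9])               # observed pressures at 3 monitoring nodes (m)

def simulate_pressures(roughness):
    """Placeholder for a pipe-network hydraulic run driven by a pipe roughness coefficient."""
    return np.array([33.0, 30.0, 29.0]) - 0.02 * roughness

samples = rng.uniform(60, 140, size=2000)             # candidate roughness values
errors = [np.sqrt(np.mean((simulate_pressures(c) - observed) ** 2)) for c in samples]
best = samples[int(np.argmin(errors))]
print(f"identified roughness: {best:.1f}, RMSE: {min(errors):.2f} m")
```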

  15. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced voigt model and local symmetry.

    PubMed

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases, which will exhibit midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating the pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model which takes into account the size and spatial information of the lesion is devised to predict the deformed midline. A composite local symmetry combining local intensity symmetry and local intensity gradient symmetry is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric dataset has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and an average maximum difference of 1.89 ± 1.18 mm from the ground truth. Experiments show that the proposed method will yield better accuracy with the geometric center of pathology being the geometric center of tumor and the pathological region being the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, this is the first report on delineation of the deformed midline both for quantification of gliomas and from MRI, which hopefully will provide valuable information for diagnosis
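
    The composite local-symmetry refinement described above can be sketched as follows: for each candidate midline column inside a local window, left/right intensity symmetry and gradient symmetry are combined into a single score and the best column is kept. The window size, weighting and toy intensity profile are illustrative assumptions, not the paper's settings.

```python
# Composite local-symmetry score over candidate midline columns (toy 1D profile).
import numpy as np

def composite_symmetry_column(row, candidates, half_width=20, w_grad=0.5):
    grad = np.gradient(row.astype(float))
    scores = []
    for x in candidates:
        offsets = np.arange(1, half_width + 1)
        left, right = row[x - offsets], row[x + offsets]
        gl, gr = grad[x - offsets], grad[x + offsets]
        s_int = -np.mean(np.abs(left - right))      # intensity symmetry (higher is better)
        s_grad = -np.mean(np.abs(gl + gr))          # mirrored gradients should cancel
        scores.append(s_int + w_grad * s_grad)
    return candidates[int(np.argmax(scores))]

row = np.concatenate([np.linspace(0, 1, 64), np.linspace(1, 0, 64)])   # symmetric toy profile
print(composite_symmetry_column(row, candidates=list(range(40, 90))))   # expected near column 63
```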

  16. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics.

    PubMed

    Islam, M R; Clark, C E F; Garcia, S C; Kerrisk, K L

    2015-07-01

    The aim of this modelling study was to investigate the effect of large herd size (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties when 50% of the total diets were provided from home grown feed either as pasture or grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as 'moderate'; optimum pasture utilisation of 19.7 t DM/ha, termed as 'high') and 2 rates of incorporation of grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of farm is CFR and 35% of farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in total walking distances between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h with increased herd size from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distances by up to 1 km, thus reducing the MI by up to 0.5 h compared to the moderate pasture, 800 cow herd combination. The high pasture utilisation combined with 30% of the farm in CFR reduced the total walking distances by up to 1.7 km and MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with yield losses increasing from c.f. 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary return

  17. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics

    PubMed Central

    Islam, M. R.; Clark, C. E. F.; Garcia, S. C.; Kerrisk, K. L.

    2015-01-01

    The aim of this modelling study was to investigate the effect of large herd size (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties when 50% of the total diets were provided from home grown feed either as pasture or grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as ‘moderate’; optimum pasture utilisation of 19.7 t DM/ha, termed as ‘high’) and 2 rates of incorporation of grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of farm is CFR and 35% of farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in total walking distances between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h with increased herd size from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distances by up to 1 km, thus reducing the MI by up to 0.5 h compared to the moderate pasture, 800 cow herd combination. The high pasture utilisation combined with 30% of the farm in CFR reduced the total walking distances by up to 1.7 km and MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with yield losses increasing from c.f. 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary

  18. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  19. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  20. Automatic anatomy recognition via fuzzy object models

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Falcão, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Matsumoto, Monica; Grevera, George J.; Saboury, Babak; Torigian, Drew A.

    2012-02-01

    To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) during radiological image reading becomes essential. As part of this larger goal, last year at this conference we presented a fuzzy strategy for building body-wide group-wise anatomic models. In the present paper, we describe the further advances made in fuzzy modeling and the algorithms and results achieved for AAR by using the fuzzy models. The proposed AAR approach consists of three distinct steps: (a) building fuzzy object models (FOMs) for each population group G; (b) using the FOMs to recognize the individual objects in any given patient image I under group G; and (c) delineating the recognized objects in I. This paper will focus mostly on (b). FOMs are built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. The hierarchical pose relationships from the parent to offspring are codified in the FOMs. Several approaches are being explored currently, grouped under two strategies, both being hierarchical: (ra1) those using search strategies; (ra2) those taking a one-shot approach by which the model pose is directly estimated without searching. Based on 32 patient CT data sets each from the thorax and abdomen and 25 objects modeled, our analysis indicates that objects do not all scale uniformly with patient size. Even the simplest among the (ra2) strategies, recognizing the root object and then placing all other descendants as per the learned parent-to-offspring pose relationship, brings the models on average within about 18 mm of the true locations.

  1. Automatic mathematical modeling for real time simulation program (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1989-01-01

    A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.

  2. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum D mean, V 65, and V 75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted-achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V 65, and -0.4 ± 1.1% for V 75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  3. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum D mean, V 65, and V 75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted-achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V 65, and -0.4 ± 1.1% for V 75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  4. Automatic Reading Technique of Analog Meter Using Automatic Correction of Measurement Basing Point

    NASA Astrophysics Data System (ADS)

    Itoh, Norihiko; Shimokawa, Hiroshi

    In electric power facilities, many analog meters without a data transfer function are used. Field engineers have to read them manually in their daily work. In order to support the field engineers, this paper proposes an automatic reading technique for analog meters. Two features of this technology are automatic basing point compensation and automatic compensation for differences in meter position.

  5. Automatic Voice Pathology Detection With Running Speech by Using Estimation of Auditory Spectrum and Cepstral Coefficients Based on the All-Pole Model.

    PubMed

    Ali, Zulfiqar; Elamvazuthi, Irraivan; Alsulaiman, Mansour; Muhammad, Ghulam

    2016-11-01

    Automatic voice pathology detection using sustained vowels has been widely explored. Because of the stationary nature of the speech waveform, pathology detection with a sustained vowel is a comparatively easier task than that using running speech. Some disorder detection systems with running speech have also been developed, although most of them are based on voice activity detection (VAD), which is itself a challenging task. Pathology detection with running speech needs more investigation, and systems with good accuracy (ACC) are required. Furthermore, pathology classification systems with running speech have not received any attention from the research community. In this article, automatic pathology detection and classification systems are developed using text-dependent running speech without adding a VAD module. A set of three psychophysics conditions of hearing (critical band spectral estimation, equal loudness hearing curve, and the intensity loudness power law of hearing) is used to estimate the auditory spectrum. The auditory spectrum and all-pole models of the auditory spectra are computed, analyzed, and used in a Gaussian mixture model for an automatic decision. In the experiments using the Massachusetts Eye & Ear Infirmary database, an ACC of 99.56% is obtained for pathology detection, and an ACC of 93.33% is obtained for the pathology classification system. The results of the proposed systems outperform the existing running-speech-based systems. The developed system can effectively be used in voice pathology detection and classification systems, and the proposed features can visually differentiate between normal and pathological samples. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
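
    As a rough illustration of the decision stage described above, the following sketch trains one Gaussian mixture model per class and classifies an utterance by comparing log-likelihoods. It uses generic placeholder features and scikit-learn's GaussianMixture rather than the authors' auditory-spectrum and all-pole features; all sizes and parameters are assumptions.

```python
# Minimal sketch of a GMM-based pathology detector: one mixture per class, decision by
# maximum average log-likelihood. The feature matrices here are random placeholders
# standing in for the auditory-spectrum / all-pole features described in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_model(feature_frames: np.ndarray, n_components: int = 8) -> GaussianMixture:
    """feature_frames: (n_frames, n_features) matrix pooled over training recordings."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
    gmm.fit(feature_frames)
    return gmm

def classify(utterance: np.ndarray, normal_gmm, pathological_gmm) -> str:
    """Score an utterance's frames under each class model and pick the higher likelihood."""
    ll_normal = normal_gmm.score(utterance)            # mean per-frame log-likelihood
    ll_pathological = pathological_gmm.score(utterance)
    return "normal" if ll_normal > ll_pathological else "pathological"

# Toy usage with synthetic feature matrices (replace with real frame-level features).
rng = np.random.default_rng(0)
normal_gmm = train_class_model(rng.normal(0.0, 1.0, (500, 13)))
path_gmm = train_class_model(rng.normal(0.5, 1.2, (500, 13)))
print(classify(rng.normal(0.0, 1.0, (200, 13)), normal_gmm, path_gmm))
```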

  6. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR Systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial Intelligence Techniques have the potential to improve the performance of ATR Systems. In this paper, we presented a novel Knowledge-Engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a Knowledge-Base (K-B) for the ATR Systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B System. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches for ATR development. The capabilities of the ARP are compared and contrasted to those of another Knowledge-Engineering tool, termed the Automatic Rule Induction (ARI), which is based on maximizing the mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.

  7. An information processing model of anxiety: automatic and strategic processes.

    PubMed

    Beck, A T; Clark, D A

    1997-01-01

    A three-stage schema-based information processing model of anxiety is described that involves: (a) the initial registration of a threat stimulus; (b) the activation of a primal threat mode; and (c) the secondary activation of more elaborative and reflective modes of thinking. The defining elements of automatic and strategic processing are discussed with the cognitive bias in anxiety reconceptualized in terms of a mixture of automatic and strategic processing characteristics depending on which stage of the information processing model is under consideration. The goal in the treatment of anxiety is to deactivate the more automatic primal threat mode and to strengthen more constructive reflective modes of thinking. Arguments are presented for the inclusion of verbal mediation as a necessary but not sufficient component in the cognitive and behavioral treatment of anxiety.

  8. Roads Centre-Axis Extraction in Airborne SAR Images: AN Approach Based on Active Contour Model with the Use of Semi-Automatic Seeding

    NASA Astrophysics Data System (ADS)

    Lotte, R. G.; Sant'Anna, S. J. S.; Almeida, C. M.

    2013-05-01

    Research dealing with computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave (radar) sensor imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among other benefits. The cartographic mapping based on these images is often manually accomplished, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, making this task a still-open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are then used as input for the extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in an airborne synthetic aperture radar (SAR) image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.
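
    The final detection step described above, an open-curve snake refining seeded road points, could be sketched roughly as follows, using scikit-image's active_contour as a stand-in for the authors' implementation; the smoothing and snake parameters are assumptions, and the SOM seeding and pruning steps are omitted.

```python
# Minimal sketch of the centre-axis refinement step: an open-curve active contour (snake)
# initialised from seed points and attracted to bright linear structure in a pre-filtered
# SAR image. Seed points here are placeholders for the SOM-detected, pruned points.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_centre_axis(sar_image: np.ndarray, seed_points: np.ndarray) -> np.ndarray:
    """seed_points: (N, 2) array of (row, col) coordinates along the candidate road."""
    smoothed = gaussian(sar_image, sigma=2, preserve_range=True)
    snake = active_contour(
        smoothed,
        seed_points.astype(float),
        alpha=0.01, beta=0.5, w_line=1.0, w_edge=0.0,   # attract to bright ridge-like structure
        boundary_condition="fixed",                      # open curve with fixed end points
    )
    return snake
```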

  9. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611

  10. Automatic draft reading based on image processing

    NASA Astrophysics Data System (ADS)

    Tsujii, Takahiro; Yoshida, Hiromi; Iiguni, Youji

    2016-10-01

    In marine transportation, a draft survey is a means to determine the quantity of bulk cargo. Automatic draft reading based on computer image processing has been proposed. However, the conventional draft mark segmentation may fail when the video sequence contains many regions other than draft marks and the hull, and the estimated waterline is inherently higher than the true one. To solve these problems, we propose an automatic draft reading method that uses morphological operations to detect draft marks and estimates the waterline for every frame with Canny edge detection and robust estimation. Moreover, we emulate the surveyors' draft reading process so that the result can be understood by both the shipper and the receiver. In an experiment in a towing tank, the draft reading error of the proposed method was <1 cm, demonstrating its advantage. It is also shown that accurate draft reading has been achieved in a real-world scene.
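
    A minimal sketch of the waterline-estimation idea (Canny edges followed by a robust estimate) is given below. It is not the authors' algorithm: the draft-mark morphology step is omitted, and the per-column/median heuristic, thresholds and the near-horizontal-waterline assumption are illustrative only.

```python
# Minimal sketch of waterline estimation in one frame: Canny edges in a hull-side image,
# then a robust (median over columns) estimate of the waterline row. Thresholds and the
# assumption of a near-horizontal waterline are illustrative, not from the paper.
import numpy as np
import cv2

def estimate_waterline_row(frame_gray: np.ndarray) -> float:
    """frame_gray: 8-bit grayscale image of the hull side; returns an estimated row index."""
    edges = cv2.Canny(frame_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if ys.size == 0:
        raise ValueError("no edges found in frame")
    # For each column take the lowest edge pixel, then take the median over columns
    # as a robust estimate of the (roughly horizontal) waterline row.
    lowest_per_column = np.array([ys[xs == c].max() for c in np.unique(xs)])
    return float(np.median(lowest_per_column))

# Averaging per-frame estimates over the video approximates the frame-wise estimation
# described in the abstract.
```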

  11. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made over the years towards automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry do not yet exist, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on some complicated manufactured parts.

  12. Production ready feature recognition based automatic group technology part coding

    SciTech Connect

    Ames, A.L.

    1990-01-01

    During the past four years, a feature recognition based expert system for automatically performing group technology part coding from solid model data has been under development. The system has become a production quality tool, capable of quickly generating the geometry based portions of a part code with no human intervention. It has been tested on over 200 solid models, half of which are models of production Sandia designs. Its performance rivals that of humans performing the same task, often surpassing them in speed and uniformity. The feature recognition capability developed for part coding is being extended to support other applications, such as manufacturability analysis, automatic decomposition (for finite element meshing and machining), and assembly planning. Initial surveys of these applications indicate that the current capability will provide a strong basis for other applications and that extensions toward more global geometric reasoning and tighter coupling with solid modeler functionality will be necessary.

  13. Using automatic programming for simulating reliability network models

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  14. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  15. Automatic GPS satellite based subsidence measurements for Ekofisk

    SciTech Connect

    Mes, M.J.; Luttenberger, C.; Landau, H.; Gustavsen, K.

    1995-12-01

    A fully automatic procedure for the measurement of subsidence of many platforms in almost real time is presented. Such a method is important for developments which may be subject to subsidence and where reliable subsidence and rate measurements are needed for safety, planning of remedial work and verification of subsidence models. Automatic GPS satellite based subsidence measurements are made continuously on platforms in the North Sea Ekofisk Field area. A description of the system is given. The derivation of those parameters which give optimal measurement accuracy is described, and the results of these derivations are provided. GPS satellite based measurements are equivalent to pressure gauge based platform subsidence measurements, but they are much cheaper to install and maintain. In addition, GPS based measurements are not subject to gauge drift. GPS measurements were coupled to oceanographic quantities such as the platform deck clearance. These quantities now follow from GPS based measurements.

  16. A new method of automatic processing of seismic waves: waveform modeling by using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Kodera, Y.; Sakai, S.

    2012-12-01

    Development of a method of automatic processing of seismic waves is needed since there are limitations to manually picking out earthquake events from seismograms. However, there is no practical method to automatically detect arrival times of P and S waves in seismograms. One typical example of previously proposed methods is automatic detection using an AR model (e.g. Kitagawa et al., 2004). This method does not seem effective for seismograms contaminated with spike noise, because it cannot distinguish non-stationary signals generated by earthquakes from those generated by noise. The difficulty of distinguishing the signals is caused by the fact that the automatic detection system lacks information on the time-series variation of seismic waves. We expect that an automatic detection system that incorporates information on seismic waveforms is more effective for seismograms contaminated with noise. We therefore apply the Hidden Markov Model (HMM) to construct seismic wave models and establish a new automatic detection method. HMM has been widely used in many fields such as voice recognition (e.g. Bishop, 2006). With the use of HMM, P- or S-waveform models that include envelopes can be constructed directly and semi-automatically from large amounts of observed P- or S-wave data. These waveform models are expected to become more robust if the quantity of observation data increases. We have constructed seismic wave models based on HMM from seismograms observed in Ashio, Japan. By using these models, we have tried automatic detection of arrival times of earthquake events in Ashio. Results show that automatic detection based on HMM is more effective for seismograms contaminated with noise than that based on the AR model.
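
    The general idea of using an HMM state sequence to flag the onset of seismic energy can be sketched as follows. This is a toy illustration, not the authors' trained P/S waveform models: it assumes the hmmlearn package is available and uses a simple two-state model on an envelope feature.

```python
# Minimal sketch: fit a Gaussian HMM to the envelope of a seismogram and take the first
# sample assigned to the high-energy state as a crude arrival pick. This illustrates the
# general idea only; the authors build dedicated P/S waveform models from many records.
import numpy as np
from hmmlearn import hmm   # assumed available; any HMM implementation would do

def crude_arrival_pick(trace: np.ndarray, win: int = 50) -> int:
    # Short-term RMS envelope in a sliding window as a 1-D feature sequence.
    energy = np.sqrt(np.convolve(trace ** 2, np.ones(win) / win, mode="same"))
    X = energy.reshape(-1, 1)
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
    model.fit(X)
    states = model.predict(X)
    # Identify the state with the larger mean energy and return its first occurrence.
    high_state = int(np.argmax(model.means_.ravel()))
    picks = np.nonzero(states == high_state)[0]
    return int(picks[0]) if picks.size else -1
```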

  17. MATURE: A Model Driven bAsed Tool to Automatically Generate a langUage That suppoRts CMMI Process Areas spEcification

    NASA Astrophysics Data System (ADS)

    Musat, David; Castaño, Víctor; Calvo-Manzano, Jose A.; Garbajosa, Juan

    Many companies have achieved a higher quality in their processes by using CMMI. Process definition may be efficiently supported by software tools. A higher automation level will make process improvement and assessment activities easier to adapt to customer needs. At present, automation of CMMI is based on tools that support practice definition in a textual way. These tools are often enhanced spreadsheets. In this paper, following the Model Driven Development paradigm (MDD), a tool that supports automatic generation of a language that can be used to specify process area practices is presented. The generation is performed from a metamodel that represents CMMI. This tool, unlike others available, can be customized according to user needs. Guidelines to specify the CMMI metamodel are also provided. The paper also shows how this approach can support other assessment methods.

  18. Vision-based industrial automatic vehicle classifier

    NASA Astrophysics Data System (ADS)

    Khanipov, Timur; Koptelov, Ivan; Grigoryev, Anton; Kuznetsova, Elena; Nikolaev, Dmitry

    2015-02-01

    The paper describes an automatic video-stream-based motor vehicle classification system. The system determines vehicle type at payment collection plazas on toll roads. Classification is performed in accordance with a preconfigured set of rules which determine type by number of wheel axles, vehicle length, height over the first axle and full height. These characteristics are calculated using various computer vision algorithms: contour detectors, correlational analysis, fast Hough transform, Viola-Jones detectors, connected components analysis, elliptic shape detectors and others. Input data contains video streams and induction loop signals. Output signals are vehicle entry and exit events, vehicle type, motion direction, speed and the above-mentioned features.

  19. Automatic identification of fault surfaces through Object Based Image Analysis of a Digital Elevation Model in the submarine area of the North Aegean Basin

    NASA Astrophysics Data System (ADS)

    Argyropoulou, Evangelia

    2015-04-01

    The current study was focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, resulting in fault surface extraction. An Object Based Image Analysis approach was developed from the bathymetric data, and the extracted features, based on morphological criteria, were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 meters spatial resolution was used. At first, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target. At level 4, the final classes of geomorphological features were classified: discontinuities, fault-like features and fault surfaces. At previous levels, additional landforms were also classified, such as continental platform and continental slope. The results of the developed approach were evaluated by two methods. At first, classification stability measures were computed within eCognition. Then, the results were compared qualitatively and quantitatively with a reference tectonic map that had been created manually based on the analysis of seismic profiles. The results of this comparison were satisfactory, which confirms the validity of the developed OBIA approach.
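
    The pre-processing step, deriving slope and curvature layers from the bathymetry grid, can be sketched with plain numpy as below; the 150 m cell size is taken from the abstract, the Laplacian is used only as a simple stand-in for profile curvature, and the eCognition segmentation and classification are not reproduced.

```python
# Minimal sketch of terrain-derivative extraction from a bathymetry grid: slope in degrees
# and a Laplacian-based curvature proxy. This only illustrates the first step of the
# workflow; the object-based segmentation/classification is done separately (eCognition).
import numpy as np

def slope_and_curvature(dem: np.ndarray, cell_size: float = 150.0):
    """dem: 2-D elevation/bathymetry array; cell_size in metres (150 m in the study)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)           # first derivatives
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    d2z_dy2, _ = np.gradient(dz_dy, cell_size)            # second derivatives
    _, d2z_dx2 = np.gradient(dz_dx, cell_size)
    curvature = d2z_dx2 + d2z_dy2                          # Laplacian as a curvature proxy
    return slope_deg, curvature
```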

  20. Nonlinear spectro-temporal features based on a cochlear model for automatic speech recognition in a noisy situation.

    PubMed

    Choi, Yong-Sun; Lee, Soo-Young

    2013-09-01

    A nonlinear speech feature extraction algorithm was developed by modeling human cochlear functions, and demonstrated as a noise-robust front-end for speech recognition systems. The algorithm was based on a model of the Organ of Corti in the human cochlea with such features as the basilar membrane (BM), outer hair cells (OHCs), and inner hair cells (IHCs). Frequency-dependent nonlinear compression and amplification of OHCs were modeled by lateral inhibition to enhance spectral contrasts. In particular, the compression coefficients had frequency dependency based on the psychoacoustic evidence. Spectral subtraction and temporal adaptation were applied in the time-frame domain. With long-term and short-term adaptation characteristics, these factors remove stationary or slowly varying components and amplify temporal changes such as onsets or offsets. The proposed features were evaluated with a noisy speech database and showed better performance than the baseline methods such as mel-frequency cepstral coefficients (MFCCs) and RASTA-PLP in unknown noisy conditions.
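
    A minimal sketch of the spectral-contrast-enhancement idea (lateral inhibition along the frequency axis) is shown below; the centre-surround kernel and its weights are illustrative assumptions and do not reproduce the paper's OHC/IHC modelling.

```python
# Minimal sketch of lateral inhibition for spectral contrast enhancement: convolve each
# time frame's log-spectrum along the frequency axis with a centre-surround kernel.
# Kernel shape and widths are illustrative; the paper's cochlear model is much richer.
import numpy as np

def lateral_inhibition(log_spectrogram: np.ndarray) -> np.ndarray:
    """log_spectrogram: (n_frames, n_bands). Returns a contrast-enhanced spectrogram."""
    kernel = np.array([-0.25, -0.5, 1.5, -0.5, -0.25])   # centre-surround, sums to zero
    out = np.empty_like(log_spectrogram, dtype=float)
    for t in range(log_spectrogram.shape[0]):
        out[t] = np.convolve(log_spectrogram[t], kernel, mode="same")
    return out
```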

  1. Automatization of hydrodynamic modelling in a Floreon+ system

    NASA Astrophysics Data System (ADS)

    Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David

    2017-07-01

    The paper describes fully automatized hydrodynamic modelling as a part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automatization of the process as a web service could provide us with immediate data based on extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as fire-fighter operators or representatives of a military service organizing evacuation during floods or river dam breaks. The paper describes the whole process, beginning with a definition of the schematization necessary for the hydrodynamic model, gathering of necessary data and its processing for a simulation, the model itself, and post-processing of the results and visualization on a web service. The process is demonstrated on real data collected during floods in our Moravian-Silesian region in 2010.

  2. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  3. Case-based synthesis in automatic advertising creation system

    NASA Astrophysics Data System (ADS)

    Zhuang, Yueting; Pan, Yunhe

    1995-08-01

    Advertising (ads) is an important design area. Though many interactive ad-design software packages have come into commercial use, none of them supports the intelligent task of automatic ad creation. The potential for this is enormous. This paper describes our current research on an automatic advertising creation system (AACS). After careful analysis of the mental behavior of a human ad designer, we conclude that a case-based approach is appropriate for modeling it intelligently. A model for AACS is given in the paper. A case in ads is described in two parts: the creation process and the configuration of the ads picture, with detailed data structures given in the paper. Along with the case representation, we put forward an algorithm. Issues such as similarity measure computation and case adaptation are also discussed.

  4. Digital movie-based on automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the Hue (H) values for each frame. The Pearson's correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was attested by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. Copyright © 2015 Elsevier B.V. All rights reserved.
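
    The signal chain described above (per-pixel Hue in the ROI, Pearson correlation against the first frame, end point from the second derivative) can be sketched as follows. Frame grabbing, valve control and the C-language control software are omitted; the function names and array shapes are assumptions for illustration.

```python
# Minimal sketch of the DMB-AT signal chain: per-pixel Hue in the ROI for each frame,
# Pearson r against the first frame, and an end point estimated from the second
# derivative of the r-vs-time curve. Frame capture and valve control are omitted.
import numpy as np

def hue_from_rgb(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    if mx == r:
        return (60.0 * (g - b) / (mx - mn)) % 360.0
    if mx == g:
        return 60.0 * (b - r) / (mx - mn) + 120.0
    return 60.0 * (r - g) / (mx - mn) + 240.0

def roi_hues(roi_rgb):
    """roi_rgb: (rows, cols, 3) array for the 28x13 region of interest."""
    return np.array([hue_from_rgb(*px) for px in roi_rgb.reshape(-1, 3).astype(float)])

def endpoint_index(frame_rois, times):
    """frame_rois: list of ROI arrays, one per frame; times: valve opening times per frame."""
    ref = roi_hues(frame_rois[0])
    r = np.array([np.corrcoef(ref, roi_hues(f))[0, 1] for f in frame_rois])
    d2 = np.gradient(np.gradient(r, times), times)
    return int(np.argmax(np.abs(d2)))   # frame index of the estimated end point
```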

  5. Automatic paper sliceform design from 3D solid models.

    PubMed

    Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N

    2013-11-01

    A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped-up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches are interlocking each other. This guarantees local pairwise assembility between patches, which is empirically shown to lead to global assembility. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.

  6. Assessing model accuracy using the homology modeling automatically software.

    PubMed

    Bhattacharya, Aneerban; Wunderlich, Zeba; Monleon, Daniel; Tejero, Roberto; Montelione, Gaetano T

    2008-01-01

    Homology modeling is a powerful technique that greatly increases the value of experimental structure determination by using the structural information of one protein to predict the structures of homologous proteins. We have previously described a method of homology modeling by satisfaction of spatial restraints (Li et al., Protein Sci 1997;6:956-970). The Homology Modeling Automatically (HOMA) web site is a new tool that uses this method to predict the 3D structure of a target protein based on the sequence alignment of the target protein to a template protein and the structure coordinates of the template. The user is presented with the resulting models, together with an extensive structure validation report providing critical assessments of the quality of the resulting homology models. The homology modeling method employed by HOMA was assessed and validated using twenty-four groups of homologous proteins. Using HOMA, homology models were generated for 510 proteins, including 264 proteins modeled with correct folds and 246 modeled with incorrect folds. Accuracies of these models were assessed by superimposition on the corresponding experimentally determined structures. A subset of these results was compared with parallel studies of modeling accuracy using several other automated homology modeling approaches. Overall, HOMA provides prediction accuracies similar to other state-of-the-art homology modeling methods. We also provide an evaluation of several structure quality validation tools in assessing the accuracy of homology models generated with HOMA. This study demonstrates that Verify3D (Luthy et al., Nature 1992;356:83-85) and ProsaII (Sippl, Proteins 1993;17:355-362) are most sensitive in distinguishing between homology models with correct or incorrect folds. For homology models that have the correct fold, the steric conformational energy (including primarily the Van der Waals energy), MolProbity clashscore (Word et al., Protein

  7. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high resolution surface temperature records of the Greenland ice sheet is an integral condition. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin and processes behind the so-called Dansgaard-Oeschger events in glacial conditions [Johnsen et al. (1992); Dansgaard et al., 1982], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled at the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant at orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, the use of firn models describing gas and temperature diffusion throughout the ice sheet is necessary. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. Thereby, a theoretical δ15N record is generated for different temperature and accumulation rate scenarios and compared with measurement data in terms of the mean square error (MSE), which finally leads to an optimization problem, namely finding the minimal MSE. The goal of the presented study is a Matlab based automatization of this inverse modelling procedure. The crucial point here is to find the temperature and accumulation rate input time series which minimize the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit
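
    The Monte Carlo branch of the optimisation described above can be sketched as a simple perturb-and-accept loop; the forward model below is a placeholder for the firn densification and heat diffusion model, and the step size, iteration count and greedy acceptance rule are assumptions.

```python
# Minimal sketch of the inverse-modelling loop: perturb the surface temperature history,
# run a forward model to obtain a synthetic d15N record, and keep perturbations that
# reduce the mean square error against the measured record. `forward_model` stands in
# for the firn densification / heat diffusion model used in the study.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

def invert_temperature(measured_d15n, forward_model, t_init, n_iter=10_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t_best = np.asarray(t_init, dtype=float).copy()
    err_best = mse(forward_model(t_best), measured_d15n)
    for _ in range(n_iter):
        candidate = t_best.copy()
        i = rng.integers(candidate.size)          # vary one point of the input time series
        candidate[i] += rng.normal(0.0, step)
        err = mse(forward_model(candidate), measured_d15n)
        if err < err_best:                        # greedy acceptance; the study additionally
            t_best, err_best = candidate, err     # keeps all solutions below a given MSE limit
    return t_best, err_best
```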

  8. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially ‘concentrating’ feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  9. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances.

    PubMed

    Islam, M R; Garcia, S C; Clark, C E F; Kerrisk, K L

    2015-06-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially 'concentrating' feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  10. Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition

    DTIC Science & Technology

    1999-05-01

    experiment was repeated T = 1000 times using independent Monte Carlo simulations. A. Uniform Target Test Set. For each k = 1, 2, ..., N, a model M_k ... correct identification (PID) and probability of false alarm (PFA) over T = 1000 Monte Carlo realizations of the experiment data. The false alarm rate is

  11. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  12. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  13. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  14. Automatic Speech Recognition Based on Electromyographic Biosignals

    NASA Astrophysics Data System (ADS)

    Jou, Szu-Chen Stan; Schultz, Tanja

    This paper presents our studies of automatic speech recognition based on electromyographic biosignals captured from the articulatory muscles in the face using surface electrodes. We develop a phone-based speech recognizer and describe how the performance of this recognizer improves by carefully designing and tailoring the extraction of relevant speech features toward electromyographic signals. Our experimental design includes the collection of audibly spoken speech simultaneously recorded as acoustic data using a close-speaking microphone and as electromyographic signals using electrodes. Our experiments indicate that electromyographic signals precede the acoustic signal by about 0.05-0.06 seconds. Furthermore, we introduce articulatory feature classifiers, which have recently been shown to improve classical speech recognition significantly. We show that the classification accuracy of articulatory features clearly benefits from the tailored feature extraction. Finally, these classifiers are integrated into the overall decoding framework using a stream architecture. Our final system achieves a word error rate of 29.9% on a 100-word recognition task.

  15. A Computational Model for the Automatic Diagnosis of Attention Deficit Hyperactivity Disorder Based on Functional Brain Volume.

    PubMed

    Tan, Lirong; Guo, Xinyu; Ren, Sheng; Epstein, Jeff N; Lu, Long J

    2017-01-01

    In this paper, we investigated the problem of computer-aided diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) using machine learning techniques. With the ADHD-200 dataset, we developed a Support Vector Machine (SVM) model to classify ADHD patients from typically developing controls (TDCs), using the regional brain volumes as predictors. Conventionally, the volume of a brain region was considered to be an anatomical feature and quantified using structural magnetic resonance images. One major contribution of the present study was the proposal to measure the regional brain volumes using fMRI images. Brain volumes measured from fMRI images were denoted as functional volumes, which quantified the volumes of brain regions that were actually functioning during fMRI imaging. We compared the predictive power of functional volumes with that of regional brain volumes measured from anatomical images, which were denoted as anatomical volumes. The former demonstrated higher discriminative power than the latter for the classification of ADHD patients vs. TDCs. Combined with our two-step feature selection approach, which integrated prior knowledge with the recursive feature elimination (RFE) algorithm, our SVM classification model combining functional volumes and demographic characteristics achieved a balanced accuracy of 67.7%, which was 16.1% higher than that of a relevant model published previously in the work of Sato et al. Furthermore, our classifier highlighted 10 brain regions that were most discriminative in distinguishing between ADHD patients and TDCs. These 10 regions were mainly located in occipital lobe, cerebellum posterior lobe, parietal lobe, frontal lobe, and temporal lobe. Our present study using functional images will likely provide new perspectives about the brain regions affected by ADHD.
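
    A rough sketch of the two-step feature selection plus linear-SVM classification described above is given below, using scikit-learn on synthetic data; the number of regions, the prior-knowledge subset and the number of retained features are hypothetical placeholders, not the study's values.

```python
# Minimal sketch of the classification approach: select candidate regional-volume features
# (step 1: prior knowledge), refine with recursive feature elimination using a linear SVM
# (step 2), then evaluate balanced accuracy. All data below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 90))            # 200 subjects x 90 regional functional volumes
y = rng.integers(0, 2, size=200)          # 0 = TDC, 1 = ADHD (synthetic labels)

prior_regions = list(range(40))           # step 1: hypothetical prior-knowledge subset
X_prior = X[:, prior_regions]

svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=10)   # step 2: keep the 10 most discriminative
X_sel = rfe.fit_transform(X_prior, y)

scores = cross_val_score(svm, X_sel, y, cv=5, scoring="balanced_accuracy")
print("balanced accuracy: %.3f" % scores.mean())
```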

  16. [Research on automatic external defibrillator based on DSP].

    PubMed

    Jing, Jun; Ding, Jingyan; Zhang, Wei; Hong, Wenxue

    2012-10-01

    Electrical defibrillation is the most effective way to treat ventricular tachycardia (VT) and ventricular fibrillation (VF). An automatic external defibrillator based on DSP is introduced in this paper. The whole design consists of the signal collection module, the microprocessor control module, the display module, the defibrillation module and the automatic recognition algorithm for VF and non-VF, etc. This automatic external defibrillator has achieved goals such as real-time ECG signal acquisition, synchronous ECG waveform display, data transfer to a USB disk, and automatic defibrillation when a shockable rhythm appears.

  17. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  18. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  19. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  20. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  1. Thesaurus-Based Automatic Book Indexing.

    ERIC Educational Resources Information Center

    Dillon, Martin

    1982-01-01

    Describes technique for automatic book indexing requiring dictionary of terms with text strings that count as instances of term and text in form suitable for processing by text formatter. Results of experimental application to portion of book text are presented, including measures of precision and recall. Ten references are noted. (EJS)

  2. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Astrophysics Data System (ADS)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  3. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  4. Geometrical and topological issues in octree based automatic meshing

    NASA Technical Reports Server (NTRS)

    Saxena, Mukul; Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.

  5. LADAR And FLIR Based Sensor Fusion For Automatic Target Classification

    NASA Astrophysics Data System (ADS)

    Selzer, Fred; Gutfinger, Dan

    1989-01-01

    The purpose of this report is to show results of automatic target classification and sensor fusion for forward looking infrared (FLIR) and Laser Radar sensors. The sensor fusion data base was acquired from the Naval Weapon Center and it consists of coregistered Laser RaDAR (range and reflectance image), FLIR (raw and preprocessed image) and TV. Using this data base we have developed techniques to extract relevant object edges from the FLIR and LADAR which are correlated to wireframe models. The resulting correlation coefficients from both the LADAR and FLIR are fused using either the Bayesian or the Dempster-Shafer combination method so as to provide a higher confidence target classification level output. Finally, to minimize the correlation process the wireframe models are modified to reflect target range (size of target) and target orientation which is extracted from the LADAR reflectance image.
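
    The fusion step described above can be sketched in a few lines. The snippet below shows a naive Bayesian combination of per-class correlation scores from the two sensors; treating the correlation coefficients directly as unnormalised likelihoods is an assumption made for illustration, and the numbers are invented. A Dempster-Shafer combination would replace the product-and-normalise step with Dempster's rule over basic probability assignments.

      import numpy as np

      def bayes_fuse(scores_ladar, scores_flir, prior=None):
          """Fuse per-class match scores from two sensors assuming conditional independence.
          Scores are treated as unnormalised likelihoods; the output is a posterior over classes."""
          s1 = np.asarray(scores_ladar, dtype=float)
          s2 = np.asarray(scores_flir, dtype=float)
          if prior is None:
              prior = np.full(s1.shape, 1.0 / s1.size)
          post = prior * s1 * s2
          return post / post.sum()

      # Correlation coefficients against three wireframe models (illustrative values).
      ladar = [0.55, 0.80, 0.40]
      flir  = [0.60, 0.70, 0.50]
      print(bayes_fuse(ladar, flir))   # the fused posterior favours the second model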

  6. 11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND BUILT BY WES. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  7. Automatic Building Information Model Query Generation

    SciTech Connect

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevents designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  8. Automatic building information model query generation

    SciTech Connect

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevents designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. In conclusion, by demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  9. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  10. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  11. Automatic seizure detection based on star graph topological indices.

    PubMed

    Fernandez-Blanco, Enrique; Rivero, Daniel; Rabuñal, Juan; Dorado, Julián; Pazos, Alejandro; Munteanu, Cristian Robert

    2012-08-15

    The recognition of seizures is very important for the diagnosis of patients with epilepsy. A seizure is a process of rhythmic discharge in the brain that occurs rarely and unpredictably. This behavior generates a need for automatic detection of seizures using the signals of long-term electroencephalographic (EEG) recordings. Due to the non-stationary character of EEG signals, conventional methods of frequency analysis are not the best alternative to obtain good results for diagnostic purposes. The present work proposes, for the first time, a method of EEG signal analysis based on star graph topological indices (SGTIs). The signal information, such as amplitude and time of occurrence, is codified into invariant SGTIs, which are the basis for classification models that can discriminate epileptic EEG records from non-epileptic ones. The method with SGTIs and the simplest linear discriminant methods provides results similar to those previously published, which are based on time-frequency analysis and artificial neural networks. Thus, this work proposes a simpler and faster alternative for automatic detection of seizures from EEG recordings. Copyright © 2012 Elsevier B.V. All rights reserved.
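
    As the abstract notes, even a simple linear discriminant over the topological indices is competitive. The sketch below shows that kind of pipeline with scikit-learn; the feature vectors are synthetic stand-ins, since computing the star graph topological indices themselves is beyond the scope of this example.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Synthetic stand-in for SGTI feature vectors: 100 records, 6 indices each.
      X_epileptic     = rng.normal(loc=1.0, scale=0.5, size=(50, 6))
      X_non_epileptic = rng.normal(loc=0.0, scale=0.5, size=(50, 6))
      X = np.vstack([X_epileptic, X_non_epileptic])
      y = np.array([1] * 50 + [0] * 50)

      clf = LinearDiscriminantAnalysis()
      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy:", scores.mean())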

  12. Image analysis techniques associated with automatic data base generation.

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  13. Graphical models and automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Bilmes, Jeff A.

    2002-11-01

    Graphical models (GMs) are a flexible statistical abstraction that has been successfully used to describe problems in a variety of different domains. Hidden Markov models, commonly used for ASR, are only one example of the large space of models constituting GMs. Therefore, GMs are useful for understanding existing ASR approaches and also offer a promising path towards novel techniques. In this work, several such ways are described, including (1) using both directed and undirected GMs to represent sparse Gaussian and conditional Gaussian distributions, (2) GMs for representing information fusion and classifier combination, (3) GMs for representing hidden articulatory information in a speech signal, (4) structural discriminability, where the graph structure itself is discriminative, and the difficulties that arise when learning discriminative structure, (5) switching graph structures, where the graph may change dynamically, and (6) language modeling. The graphical model toolkit (GMTK), a software system for general graphical-model based speech recognition and time series analysis, will also be described, including a number of GMTK's features that are specifically geared to ASR.

  14. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  15. Performance models for automatic evaluation of virtual scanning keyboards.

    PubMed

    Bhattacharya, Samit; Samanta, Debasis; Basu, Anupam

    2008-10-01

    Virtual scanning keyboards are commonly used augmentative communication aids by persons with severe speech and motion impairments. Designers of virtual scanning keyboards face problems in evaluating alternate designs and hence in choosing the better design among alternatives. Automatic evaluation of designs will be helpful to designers in making the appropriate design choice. In this paper, we present performance models for virtual scanning keyboards that can be used for automatic evaluation. The proposed models address the limitations present in the reported work on similar models. We compared the model predictions with results from user trials and established the validity of the proposed models.

  16. Joint sparse representation based automatic target recognition in SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, Haichao; Nasrabadi, Nasser M.; Huang, Thomas S.; Zhang, Yanning

    2011-06-01

    In this paper, we introduce a novel joint sparse representation based automatic target recognition (ATR) method using multiple views, which can not only handle multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views for a single joint recognition decision. We cast the problem as a multi-variate regression model and recover the sparse representations for the multiple views simultaneously. The recognition is accomplished by classifying the target to the class which gives the minimum total reconstruction error accumulated across all the views. Extensive experiments have been carried out on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as the linear Support Vector Machine (SVM), kernel SVM as well as a sparse representation based classifier. Experimental results demonstrate the effectiveness as well as the robustness of the proposed joint sparse representation ATR method.
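
    The core decision rule, assigning a test sample to the class with the minimum reconstruction error over that class's training dictionary, can be sketched compactly. The fragment below is a single-view, least-squares simplification of the joint sparse model (no sparsity penalty and no multi-view coupling), with invented dictionaries, so it only illustrates the classification-by-reconstruction idea.

      import numpy as np

      def sr_classify(y, class_dicts):
          """Assign y to the class whose training dictionary reconstructs it with the
          smallest least-squares residual (a single-view simplification of the joint model)."""
          errors = {}
          for label, D in class_dicts.items():
              coef, *_ = np.linalg.lstsq(D, y, rcond=None)
              errors[label] = np.linalg.norm(y - D @ coef)
          return min(errors, key=errors.get), errors

      rng = np.random.default_rng(1)
      # Illustrative dictionaries: columns are vectorised training chips per target class.
      dicts = {"tank": rng.normal(size=(64, 10)), "truck": rng.normal(size=(64, 10))}
      test_view = dicts["tank"] @ rng.normal(size=10)     # a view synthesised from class "tank"
      label, errs = sr_classify(test_view, dicts)
      print(label, errs)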

  17. a Sensor Based Automatic Ovulation Prediction System for Dairy Cows

    NASA Astrophysics Data System (ADS)

    Mottram, Toby; Hart, John; Pemberton, Roy

    2000-12-01

    Sensor scientists have been successful in developing detectors for tiny concentrations of rare compounds, but the work is rarely applied in practice. Any but the most trivial application of sensors requires a specification that should include a sampling system, a sensor, a calibration system and a model of how the information is to be used to control the process of interest. The specification of the sensor system should ask the following questions. How will the material to be analysed be sampled? What decision can be made with the information available from a proposed sensor? This project provides a model of a systems approach to the implementation of automatic ovulation prediction in dairy cows. A healthy, well managed dairy cow should calve every year to make the best use of forage. As most cows are inseminated artificially, it is of vital importance that cows are regularly monitored for signs of oestrus. The pressure on dairymen to manage more cows often leads to less time being available for observation of cows to detect oestrus. This, together with breeding and feeding for increased yields, has led to a reduction in reproductive performance. In the UK the typical dairy farmer could save € 12800 per year if ovulation could be predicted accurately. Research over a number of years has shown that regular analysis of milk samples with tests based on enzyme linked immunoassay (ELISA) can map the ovulation cycle. However, these tests require the farmer to implement a manually operated sampling and analysis procedure and the technique has not been widely taken up. The best potential method of achieving 98% specificity of prediction of ovulation is to adapt biosensor techniques to emulate the ELISA tests automatically in the milking system. An automated ovulation prediction system for dairy cows is specified. The system integrates a biosensor with automatic milk sampling and a herd management database. The biosensor is a screen printed carbon electrode system capable of

  18. A Robot Based Automatic Paint Inspection System

    NASA Astrophysics Data System (ADS)

    Atkinson, R. M.; Claridge, J. F.

    1988-06-01

    The final inspection of manufactured goods is a labour intensive activity. The use of human inspectors has a number of potential disadvantages; it can be expensive, the inspection standard applied is subjective and the inspection process can be slow compared with the production process. The use of automatic optical and electronic systems to perform the inspection task is now a growing practice but, in general, such systems have been applied to small components which are accurately presented. Recent advances in vision systems and robot control technology have made possible the installation of an automated paint inspection system at the Austin Rover Group's plant at Cowley, Oxford. The automatic inspection of painted car bodies is a particularly difficult problem, but one which has major benefits. The pass line of the car bodies is ill-determined, the surface to be inspected is of varying surface geometry and only a short time is available to inspect a large surface area. The benefits, however, are due to the consistent standard of inspection which should lead to lower levels of customer complaints and improved process feedback. The Austin Rover Group initiated the development of a system to fulfil this requirement. Three companies collaborated on the project; Austin Rover itself undertook the production line modifications required for body presentation, Sira Ltd developed the inspection cameras and signal processing system and Unimation (Europe) Ltd designed, supplied and programmed the robot system. Sira's development was supported by a grant from the Department of Trade and Industry.

  19. Automatic building information model query generation

    DOE PAGES

    Jiang, Yufei; Yu, Nan; Ming, Jiang; ...

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevents designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. In conclusion, by demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  20. Kernel for modular robot applications: Automatic modeling techniques

    SciTech Connect

    Chen, I.M.; Yeo, S.H.; Chen, G.; Yang, G.

    1999-02-01

    A modular robotic system consists of standardized joint and link units that can be assembled into various kinematic configurations for different types of tasks. For the control and simulation of such a system, manual derivation of the kinematic and dynamic models, as well as the error model for kinematic calibration, requires tremendous effort, because the models constantly change as the robot geometry is altered after module reconfiguration. This paper presents a framework to facilitate the model-generation procedure for the control and simulation of the modular robot system. A graph technique, termed kinematic graphs and realized through assembly incidence matrices (AIM), is introduced to represent the module-assembly sequence and robot geometry. The kinematics and dynamics are formulated based on a local representation of the theory of Lie groups and Lie algebras. The automatic model-generation procedure starts with a given assembly graph of the modular robot. Kinematic, dynamic, and error models of the robot are then established, based on the local representations and iterative graph-traversing algorithms. This approach can be applied to a modular robot with both serial and branch-type geometries, and arbitrary degrees of freedom. Furthermore, the AIM of the robot naturally leads to solving the task-oriented optimal configuration problem in modular robots. There is no need to maintain a huge library of robot models, and the footprint of the overall software system can be reduced.
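
    The flavour of automatic model generation by traversing an assembly description can be conveyed with a toy example. The sketch below composes module transforms along a purely serial assembly to obtain a forward-kinematics model; the module list and parameters are invented, and the full AIM/Lie-group formulation of the paper is not reproduced.

      import numpy as np

      def rot_z(theta):
          c, s = np.cos(theta), np.sin(theta)
          T = np.eye(4)
          T[:2, :2] = [[c, -s], [s, c]]
          return T

      def trans_x(length):
          T = np.eye(4)
          T[0, 3] = length
          return T

      # A serial assembly described as an ordered list of (joint, link) modules;
      # in a full system this would be derived from the assembly incidence matrix.
      assembly = [("revolute", 0.5), ("revolute", 0.3), ("revolute", 0.2)]

      def forward_kinematics(assembly, joint_angles):
          """Compose module transforms along the assembly to get the end-effector pose."""
          T = np.eye(4)
          for (kind, link_len), q in zip(assembly, joint_angles):
              if kind == "revolute":
                  T = T @ rot_z(q) @ trans_x(link_len)
          return T

      print(forward_kinematics(assembly, [0.1, 0.4, -0.2]))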

  1. Automatic activity estimation based on object behaviour signature

    NASA Astrophysics Data System (ADS)

    Martínez-Pérez, F. E.; González-Fraga, J. A.; Tentori, M.

    2010-08-01

    Automatic estimation of human activities is a widely studied topic. However, the process becomes difficult when we want to estimate activities from a video stream, because human activities are dynamic and complex. Furthermore, we have to take into account the amount of information that images provide, since it makes modelling and estimating activities hard work. In this paper we propose a method for activity estimation based on object behaviour. Objects are located in a delimited observation area and their handling is recorded with a video camera. Activity estimation can be done automatically by analyzing the video sequences. The proposed method is called "signature recognition" because it considers a space-time signature of the behaviour of objects that are used in particular activities (e.g. patients' care in a healthcare environment for elderly people with restricted mobility). A pulse is produced when an object appears in or disappears from the observation area. This means there is a change from zero to one or vice versa. These changes are produced by the identification of the objects with a bank of nonlinear correlation filters. Each object is processed independently and produces its own pulses; hence we are able to recognize several objects with different patterns at the same time. The method is applied to estimate three healthcare-related activities of elderly people with restricted mobility.

  2. Incremental logistic regression for customizing automatic diagnostic models.

    PubMed

    Tortajada, Salvador; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    In the last decades, and following the new trends in medicine, statistical learning techniques have been used for developing automatic diagnostic models for aiding the clinical experts through the use of Clinical Decision Support Systems. The development of these models requires a large, representative amount of data, which is commonly obtained from one hospital or a group of hospitals after an expensive and time-consuming gathering, preprocessing, and validation of cases. After development, the model has to pass an external validation that is often carried out in a different hospital or health center. Experience shows that such models often fall short of expectations. Furthermore, sending and storing patient data requires ethical approval and patient consent. For these reasons, we introduce an incremental learning algorithm based on the Bayesian inference approach that may allow us to build an initial model with a smaller number of cases and update it incrementally when new data are collected, or even recalibrate a model from a different center by using a reduced number of cases. The performance of our algorithm is demonstrated on different benchmark datasets and a real brain tumor dataset; we compare its performance to a previous incremental algorithm and a non-incremental Bayesian model, showing that the algorithm is independent of the data model, iterative, and has good convergence.
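
    The incremental idea, build an initial model from a smaller cohort and update it as new cases arrive, can be illustrated with scikit-learn's partial_fit interface. This is a stochastic-gradient logistic regression rather than the authors' Bayesian algorithm, the data are synthetic, and the loss name "log_loss" assumes a recent scikit-learn release.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(0)
      # Initial batch from "hospital A" (synthetic): 2-class problem with 5 features.
      X_a = rng.normal(size=(200, 5)); y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)
      # Later batch from "hospital B" with a slightly shifted distribution.
      X_b = rng.normal(loc=0.3, size=(50, 5)); y_b = (X_b[:, 0] + X_b[:, 1] > 0.3).astype(int)

      model = SGDClassifier(loss="log_loss", random_state=0)
      model.partial_fit(X_a, y_a, classes=np.array([0, 1]))   # build the initial model
      model.partial_fit(X_b, y_b)                              # update with the new cases only
      print(model.score(X_b, y_b))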

  3. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content or Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, but assigning metadata manually is not feasible. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents or learning objects. If (public) enterprise objects were modelled in a 'machine understandable' way, they could provide the context for automatic metadata generation. The approach introduced in this paper is to model context (the (public) enterprise objects) in an ontology and use that ontology to infer content-related metadata.

  4. Using suggestion to model different types of automatic writing.

    PubMed

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled. Copyright © 2014. Published by Elsevier Inc.

  5. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept for physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that enables the automatic creation of 3D databases directly usable by the physical sensor simulation CHORALE renderers. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray tracing rendering of CHORALE, for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for the automatic generation of the inner parts of buildings.

  6. Analysis of Automatic Automotive Gear Boxes by Means of Versatile Graph-Based Methods

    NASA Astrophysics Data System (ADS)

    Drewniak, J.; Kopeć, J.; Zawiślak, S.

    Automotive gear boxes are special mechanisms which are created based upon planetary gears and additionally equipped with control systems. The control system allows for the activation of particular drives. In the present paper, some graph-based models of these boxes are considered, i.e. contour, bond and mixed graphs. An exemplary automatic gear box is considered. Based upon the introduced models, ratios for some drives have been calculated. Advantages of the proposed method of modeling are its algorithmic approach and simplicity.

  7. Study on automatic testing network based on LXI

    NASA Astrophysics Data System (ADS)

    Hu, Qin; Xu, Xing

    2006-11-01

    LXI (LAN eXtensions for Instrumentation), which is an extension of the widely used Ethernet technology into the automatic testing field, is the next generation instrumentation platform. The LXI standard is based on industry standard Ethernet technology, using the standard PC interface as the primary communication bus between devices. It implements the IEEE 802.3 standard and supports the TCP/IP protocol. LXI takes advantage of the ease of use of GPIB-based instruments, the high performance and compact size of VXI/PXI instruments, and the flexibility and high throughput of Ethernet, all at the same time. The paper firstly introduces the specification of the LXI standard. Then, an automatic testing network architecture based on the LXI platform is proposed. The automatic testing network is composed of several sets of LXI-based instruments, which are connected via an Ethernet switch or router. The network is computer-centric, and all the LXI-based instruments in the network are configured and initialized from the computer. The computer controls the data acquisition and displays the data on the screen. The instruments use an Ethernet connection as the I/O interface, and can be triggered over a wired trigger interface, over LAN, or over the IEEE 1588 Precision Time Protocol running over the LAN interface. A hybrid automatic testing network comprised of LXI compliant devices and legacy instruments, including LAN instruments as well as GPIB, VXI and PXI products connected via internal or external adaptors, is also discussed at the end of the paper.
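
    Because LXI instruments are plain TCP/IP devices, a controller can often talk to them with nothing more than a socket. The sketch below sends a SCPI-style identification query over a raw TCP connection; the port number 5025 and the *IDN? command are common conventions rather than requirements of the LXI standard, and the address is a placeholder.

      import socket

      def query_instrument(host, command="*IDN?", port=5025, timeout=2.0):
          """Send a SCPI-style query to a LAN instrument over a raw TCP socket
          and return the reply (port 5025 is a common, but not universal, convention)."""
          with socket.create_connection((host, port), timeout=timeout) as sock:
              sock.sendall((command + "\n").encode("ascii"))
              return sock.recv(4096).decode("ascii").strip()

      # print(query_instrument("192.168.0.42"))   # identify the device at this placeholder address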

  8. An automatic image inpainting algorithm based on FCM.

    PubMed

    Liu, Jiansheng; Liu, Hui; Qiao, Shangping; Yue, Guangxue

    2014-01-01

    There are many existing image inpainting algorithms in which the repaired area must be manually determined by users. Aiming at this drawback of traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm which automatically identifies the repaired area by the fuzzy C-means (FCM) algorithm. The FCM algorithm classifies the image pixels into a number of categories according to the similarity principle, making similar pixels cluster into the same category as far as possible. According to the provided gray value of the pixels to be inpainted, we determine the category whose distance to the inpainting area is the nearest; this category defines the area to be inpainted, and the inpainting area is then restored by the TV model to realize automatic image inpainting.
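
    The clustering step at the heart of the method can be sketched with a minimal fuzzy C-means implementation on gray values. The snippet below is a bare-bones illustration with invented data; parameter choices (three clusters, fuzzifier m = 2) are arbitrary, and the TV inpainting stage is not shown.

      import numpy as np

      def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50):
          """Minimal fuzzy C-means on 1-D gray values: returns cluster centers and memberships."""
          x = np.asarray(values, dtype=float).reshape(-1, 1)
          rng = np.random.default_rng(0)
          u = rng.random((x.shape[0], n_clusters))
          u /= u.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              um = u ** m
              centers = (um.T @ x) / um.sum(axis=0)[:, None]      # weighted cluster centers
              d = np.abs(x - centers.T) + 1e-12                    # distances to each center
              inv = d ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)             # updated memberships
          return centers.ravel(), u

      gray = np.concatenate([np.full(100, 30.0), np.full(100, 128.0), np.full(100, 220.0)])
      centers, memberships = fuzzy_c_means(gray)
      print(np.sort(centers))    # approximately [30, 128, 220]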

  9. An Automatic Image Inpainting Algorithm Based on FCM

    PubMed Central

    Liu, Jiansheng; Liu, Hui; Qiao, Shangping; Yue, Guangxue

    2014-01-01

    There are many existing image inpainting algorithms in which the repaired area must be manually determined by users. Aiming at this drawback of traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm which automatically identifies the repaired area by the fuzzy C-means (FCM) algorithm. The FCM algorithm classifies the image pixels into a number of categories according to the similarity principle, making similar pixels cluster into the same category as far as possible. According to the provided gray value of the pixels to be inpainted, we determine the category whose distance to the inpainting area is the nearest; this category defines the area to be inpainted, and the inpainting area is then restored by the TV model to realize automatic image inpainting. PMID:24516358

  10. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods to automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  11. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have been associated with an extremely high manual, and therefore uneconomical, effort for the processing of models. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the world of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.

  12. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. This paper presents a superpixel density based clustering algorithm for automatic image classification and outlier identification. The pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into superpixels and divided into a small number of superpixel sub-blocks before the density and distance calculations; a normalized density and distance discrimination rule is then designed to achieve automatic classification and cluster-center selection, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method does not require human intervention, can automatically categorize images, is faster than the density clustering algorithm, and can effectively perform automated classification and outlier extraction.
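
    A minimal sketch of the density-and-distance idea is given below, in the spirit of density-peak clustering: points with high local density and a large distance to any denser point are taken as cluster centers. The cut-off distance, the number of centers and the direct assignment of points to the nearest center are simplifications made for the example.

      import numpy as np

      def density_peaks(points, dc=1.0, n_clusters=2):
          """Cluster-center selection in the spirit of density-peak clustering:
          points with high local density and large distance to any denser point are centers."""
          X = np.asarray(points, dtype=float)
          dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          rho = (dist < dc).sum(axis=1) - 1                 # local density (cut-off kernel)
          delta = np.empty(len(X))
          for i in range(len(X)):
              denser = np.where(rho > rho[i])[0]
              delta[i] = dist[i].max() if denser.size == 0 else dist[i, denser].min()
          gamma = rho * delta                               # decision value for center selection
          centers = np.argsort(gamma)[-n_clusters:]
          labels = centers[np.argmin(dist[:, centers], axis=1)]
          return centers, labels

      rng = np.random.default_rng(0)
      pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
      centers, labels = density_peaks(pts)
      print(centers, np.unique(labels))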

  13. Knowledge-based system for automatic MBR control.

    PubMed

    Comas, J; Meabe, E; Sancho, L; Ferrero, G; Sipma, J; Monclús, H; Rodriguez-Roda, I

    2010-01-01

    MBR technology is currently challenging traditional wastewater treatment systems and is increasingly selected for WWTP upgrading. MBR systems typically are constructed on a smaller footprint, and provide superior treated water quality. However, the main drawback of MBR technology is that the permeability of membranes declines during filtration due to membrane fouling, which for a large part causes the high aeration requirements of an MBR to counteract this fouling phenomenon. Due to the complex and still unknown mechanisms of membrane fouling it is neither possible to describe clearly its development by means of a deterministic model, nor to control it with a purely mathematical law. Consequently the majority of MBR applications are controlled in an "open-loop" way i.e. with predefined and fixed air scour and filtration/relaxation or backwashing cycles, and scheduled inline or offline chemical cleaning as a preventive measure, without taking into account the real needs of membrane cleaning based on its filtration performance. However, existing theoretical and empirical knowledge about potential cause-effect relations between a number of factors (influent characteristics, biomass characteristics and operational conditions) and MBR operation can be used to build a knowledge-based decision support system (KB-DSS) for the automatic control of MBRs. This KB-DSS contains a knowledge-based control module, which, based on real time comparison of the current permeability trend with "reference trends", aims at optimizing the operation and energy costs and decreasing fouling rates. In practice the automatic control system proposed regulates the set points of the key operational variables controlled in MBR systems (permeate flux, relaxation and backwash times, backwash flows and times, aeration flow rates, chemical cleaning frequency, waste sludge flow rate and recycle flow rates) and identifies its optimal value. This paper describes the concepts and the 3-level architecture

  14. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, during international negotiations on separating disputed areas, manual adjustment alone is applied to the match between the delimitation line and the real terrain, which not only consumes much time and labor, but also cannot ensure high precision. Concerning that, this paper explores automatic matching between them and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, a cost layer is acquired through dedicated processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Module Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from different aspects, including delimitation laws, two-sided benefits and so on. Consequently, it is concluded that the method of automatic matching is feasible and effective.
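
    The second step, constructing the new line with Least-Cost Path Analysis, amounts to a shortest-path search over a cost raster. The sketch below runs Dijkstra's algorithm on a small 4-connected grid; the cost values are invented, and a GIS implementation would normally operate on a raster derived from the delimitation and terrain feature layers.

      import heapq

      def least_cost_path(cost, start, goal):
          """Dijkstra search over a 2-D cost raster (4-connected); returns the cheapest path."""
          rows, cols = len(cost), len(cost[0])
          dist = {start: 0.0}
          prev = {}
          heap = [(0.0, start)]
          while heap:
              d, node = heapq.heappop(heap)
              if node == goal:
                  break
              if d > dist.get(node, float("inf")):
                  continue
              r, c = node
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + cost[nr][nc]
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          prev[(nr, nc)] = node
                          heapq.heappush(heap, (nd, (nr, nc)))
          path, node = [], goal
          while node != start:
              path.append(node)
              node = prev[node]
          return [start] + path[::-1]

      # Cells crossing protected terrain features carry a high traversal cost.
      raster = [[1, 1, 1, 1],
                [1, 9, 9, 1],
                [1, 1, 9, 1],
                [1, 1, 1, 1]]
      print(least_cost_path(raster, (0, 0), (3, 3)))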

  15. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  16. On Automatic Support to Indexing a Life Sciences Data Base.

    ERIC Educational Resources Information Center

    Vleduts-Stokolov, N.

    1982-01-01

    Describes technique developed as automatic support to subject heading indexing at BIOSIS based on use of formalized language for semantic representation of biological texts and subject headings. Language structures, experimental results, and analysis of journal/subject heading and author/subject heading correlation data are discussed. References…

  17. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  18. Automatic mapping and modeling of human networks

    NASA Astrophysics Data System (ADS)

    Pentland, Alex (Sandy)

    2007-05-01

    Mobile telephones, company ID badges, and similar common devices form a sensor network which can be used to map human activity, and especially human interactions. The most informative sensor data seem to be measurements of person-to-person proximity, and statistics of vocalization and body movement measurements. Using this data to model individual behavior as a stochastic process allows prediction of future activity, with the greatest predictive power obtained by modeling the interactions between individual processes. Experiments show that between 40% and 95% of the variance in human behavior may be explained by such models.

  19. Automatic Paroxysmal Atrial Fibrillation Based on not Fibrillating ECGs.

    PubMed

    Ros, E; Mota, S; Toro, F J; Díaz, A F; Fernández, F J

    2004-01-01

    The objective of this paper is to describe an automatic algorithm for Paroxysmal Atrial Fibrillation (PAF) detection, based on parameters extracted from ECG traces with no atrial fibrillation episodes. A modular automatic classification algorithm for PAF diagnosis is developed and evaluated with different parameter configurations. The database used in this study was provided by PhysioBank for the Computers in Cardiology Challenge 2001. Each ECG file in this database was translated into a 48-parameter vector. The modular classification algorithm used for PAF diagnosis was based on the K nearest neighbours. Several configuration options were evaluated to optimize the classification performance. Different configurations of the proposed modular classification algorithm were tested. The uni-parametric approach achieved a top classification rate of 76%. A multi-parametric approach was configured using the 5 parameters with the highest discrimination power, and a top classification rate of 80% was achieved; different functions to typify the parameters were tested. Finally, two automatic parametric scanning strategies, the Forward and Backward methods, were adopted. The results obtained with these approaches achieved a top classification rate of 92%. A modular classification algorithm based on the K nearest neighbours was designed. The classification performance of the algorithm was evaluated using different parameter configurations, typification functions and numbers of neighbours K. The automatic parametric scanning techniques achieved much better results than the previously tested configurations.
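
    The Forward scanning strategy can be pictured as a greedy loop that keeps adding the parameter which most improves cross-validated nearest-neighbour accuracy. The sketch below implements that loop with scikit-learn on synthetic data; the feature count, labels and the choice of K are placeholders rather than the study's configuration.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      def forward_scan(X, y, k=5, max_feats=5):
          """Greedy forward parameter scan: add the feature that most improves
          cross-validated K-nearest-neighbour accuracy at each step."""
          selected, best_scores = [], []
          remaining = list(range(X.shape[1]))
          while remaining and len(selected) < max_feats:
              scores = {f: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                           X[:, selected + [f]], y, cv=5).mean()
                        for f in remaining}
              best = max(scores, key=scores.get)
              selected.append(best)
              remaining.remove(best)
              best_scores.append(scores[best])
          return selected, best_scores

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 10))                 # stand-in for the 48 ECG parameters
      y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)  # synthetic label driven by two features
      print(forward_scan(X, y))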

  20. Time series modeling for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Sokolnikov, Andre

    2012-05-01

    Time series modeling is proposed for the identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. A missing-data approach is elaborated that supports the prediction process for image creation or reconstruction. The results are provided.

  1. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
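
    Once neighbour information has been collected from the switches, turning it into a graph model is straightforward. The sketch below uses networkx with a hand-written list of links standing in for the automatically discovered ones; the node names are hypothetical.

      import networkx as nx

      # Neighbour records as they might be collected from switch management interfaces
      # (hypothetical data; the real suite discovers these automatically).
      links = [("switch-1", "node-001"), ("switch-1", "node-002"),
               ("switch-2", "node-003"), ("switch-1", "switch-2")]

      topology = nx.Graph()
      for a, b in links:
          topology.add_edge(a, b)

      # Components of the model: hosts hang off switches; inter-switch links form the core.
      print(sorted(topology.neighbors("switch-1")))
      print(nx.shortest_path(topology, "node-001", "node-003"))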

  2. A manual and an automatic TERS based virus discrimination

    NASA Astrophysics Data System (ADS)

    Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen

    2015-02-01

    Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised with a classification accuracy of 91%.

  3. Automatic Structure-Based Code Generation from Coloured Petri Nets: A Proof of Concept

    NASA Astrophysics Data System (ADS)

    Kristensen, Lars Michael; Westergaard, Michael

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs (PP-CPNs), a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs cater for a four-phase, structure-based automatic code generation process directed by the control flow of processes. The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF).

  4. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow allow increasing the efficiency of the production planning process in dynamic manufacturing networks. The use of a semi-automatic model generation concept based on a parametric approach supporting the production planning process is presented. The presented approach allows the use of simulation and visualization for the verification of production plans and alternative topologies of manufacturing network configurations, as well as the automatic generation of a series of production flow scenarios. Computational examples with the application of the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  5. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
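
    The bootstrap step can be illustrated independently of the cone inversion itself. The sketch below resamples a set of per-event measurements with replacement to obtain a distribution for an estimated parameter; the data and the choice of the mean as the estimator are placeholders, not the paper's inversion routine.

      import numpy as np

      def bootstrap_param(samples, estimator, n_boot=1000, seed=0):
          """Return the bootstrap distribution of a model parameter estimated from data."""
          rng = np.random.default_rng(seed)
          samples = np.asarray(samples)
          stats = np.empty(n_boot)
          for i in range(n_boot):
              resample = rng.choice(samples, size=samples.size, replace=True)
              stats[i] = estimator(resample)
          return stats

      # Illustrative stand-in: apparent angular-width measurements from a segmented CME mass,
      # with the half-angle estimated as their mean.
      widths = np.random.default_rng(1).normal(loc=40.0, scale=5.0, size=200)
      dist = bootstrap_param(widths, np.mean)
      print(dist.mean(), np.percentile(dist, [2.5, 97.5]))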

  6. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  7. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  8. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week. In-situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for the flight and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed in order to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.

  9. Automatic computational models of acoustical category features: Talking versus singing

    NASA Astrophysics Data System (ADS)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: the Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
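
    The Kolmogorov-Smirnov feature-separation measure mentioned above can be illustrated with scipy; the feature values below are synthetic and the feature itself is hypothetical.

        # Minimal sketch: score one acoustic feature by the KS distance between
        # its talking and singing distributions (larger = better separation).
        import numpy as np
        from scipy.stats import ks_2samp

        def ks_separation(feature_talking, feature_singing):
            stat, p_value = ks_2samp(feature_talking, feature_singing)
            return stat, p_value

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            talking = rng.normal(loc=0.2, scale=0.1, size=500)   # e.g. a pitch-stability feature
            singing = rng.normal(loc=0.5, scale=0.1, size=500)
            stat, p = ks_separation(talking, singing)
            print(f"KS distance = {stat:.3f}, p = {p:.2e}")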

  10. Edge density based automatic detection of inflammation in colonoscopy videos.

    PubMed

    Ševo, I; Avramović, A; Balasingham, I; Elle, O J; Bergsland, J; Aabakken, L

    2016-05-01

    Colon cancer is one of the deadliest diseases where early detection can prolong life and can increase the survival rates. The early stage disease is typically associated with polyps and mucosa inflammation. The often used diagnostic tools rely on high quality videos obtained from colonoscopy or capsule endoscope. The state-of-the-art image processing techniques of video analysis for automatic detection of anomalies use statistical and neural network methods. In this paper, we investigated a simple alternative model-based approach using texture analysis. The method can easily be implemented in parallel processing mode for real-time applications. A characteristic texture of inflamed tissue is used to distinguish between inflammatory and healthy tissues, where an appropriate filter kernel was proposed and implemented to efficiently detect this specific texture. The basic method is further improved to eliminate the effect of blood vessels present in the lower part of the descending colon. Both approaches of the proposed method were described in detail and tested in two different computer experiments. Our results show that the inflammatory region can be detected in real-time with an accuracy of over 84%. Furthermore, the experimental study showed that it is possible to detect certain segments of video frames containing inflammations with the detection accuracy above 90%.
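
    A minimal sketch of the filter-and-threshold idea under stated assumptions: a generic Laplacian-like kernel stands in for the paper's texture kernel, and the frame, averaging window and threshold are synthetic and illustrative.

        # Minimal sketch: convolve a texture-sensitive kernel, compute local texture
        # energy, and flag high-response regions. Kernel and thresholds are illustrative.
        import numpy as np
        from scipy.ndimage import convolve, uniform_filter

        def texture_response_mask(frame, threshold=0.15):
            """Boolean mask of pixels whose local texture energy exceeds a threshold."""
            kernel = np.array([[0, -1, 0],
                               [-1, 4, -1],
                               [0, -1, 0]], dtype=float)     # generic high-frequency kernel
            response = convolve(frame.astype(float), kernel, mode="reflect")
            energy = uniform_filter(response ** 2, size=15)   # local texture energy
            return energy > threshold

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            frame = rng.normal(0.5, 0.02, (240, 320))
            frame[80:160, 100:200] += rng.normal(0, 0.2, (80, 100))  # textured "inflamed" patch
            mask = texture_response_mask(frame)
            print("flagged fraction:", round(mask.mean(), 3))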

  11. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can

  12. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most

  13. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    PubMed

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-09-12

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  14. Research on Air Traffic Control Automatic System Software Reliability Based on Markov Chain

    NASA Astrophysics Data System (ADS)

    Wang, Xinglong; Liu, Weixiang

    Ensuring safe aircraft separation and high air traffic efficiency are the main tasks of an air traffic control automatic system. This paper puts forward a Markov model of an Air Traffic Control Automatic System (ATCAS) built from 36 months of ATCAS failure data. A Markov-chain method is used to predict the states s1, s2 and s3 of the ATCAS and to predict and validate its reliability according to reliability theory. The experimental results show that the method is practicable and can be used in future research.
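
    A minimal sketch of the Markov-chain step under the assumption that s1, s2 and s3 denote discrete system states: a transition matrix is estimated from a synthetic 36-month state history and propagated forward; none of the numbers come from the paper.

        # Minimal sketch: estimate a state transition matrix from monthly data and
        # predict the state distribution several months ahead.
        import numpy as np

        def estimate_transition_matrix(states, n_states=3):
            """Count observed transitions and normalize each row to probabilities."""
            counts = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                counts[a, b] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def predict_distribution(p0, transition, months):
            """Propagate an initial state distribution `months` steps forward."""
            return p0 @ np.linalg.matrix_power(transition, months)

        if __name__ == "__main__":
            # 36 months of synthetic states: 0 = s1 normal, 1 = s2 degraded, 2 = s3 failed
            history = [0] * 20 + [1, 0, 0, 1, 1, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1, 0]
            T = estimate_transition_matrix(history)
            p_future = predict_distribution(np.array([1.0, 0.0, 0.0]), T, months=6)
            print("6-month state probabilities:", np.round(p_future, 3))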

  15. Automatic seamless image mosaic method based on SIFT features

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature transform (SIFT) algorithm is used for feature extraction and matching, which achieves sub-pixel precision. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with RANSAC, the computational efficiency is higher and more inliers are obtained. The matrix H is further refined with the Levenberg-Marquardt (LM) algorithm. Finally, the image mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaics are of high quality and the algorithm is very stable, making it highly valuable in practice.
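
    A rough OpenCV sketch of the pipeline shape (SIFT matching, robust homography estimation, warping); it substitutes plain RANSAC for the paper's improved PROSAC, omits the LM refinement and seam smoothing, and the image paths are placeholders. cv2.SIFT_create requires OpenCV 4.4+ (or a contrib build).

        # Minimal sketch of a SIFT-based two-image mosaic with OpenCV.
        import cv2
        import numpy as np

        def stitch_pair(img1, img2, ratio=0.75):
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            # Lowe's ratio test on 2-nearest-neighbour matches
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                       if m.distance < ratio * n.distance]
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # RANSAC here stands in for the paper's improved PROSAC
            H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = img2.shape[:2]
            mosaic = cv2.warpPerspective(img1, H, (w * 2, h))
            mosaic[:h, :w] = img2   # naive overwrite instead of seam smoothing
            return mosaic

        if __name__ == "__main__":
            a = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder paths
            b = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
            cv2.imwrite("mosaic.jpg", stitch_pair(a, b))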

  16. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    NASA Astrophysics Data System (ADS)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful to facilitate an ophthalmologist in the ROP screening and to prevent childhood blindness. This paper proposes a method to automatically classify the image into tortuous and non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using segments characterized by the highest tortuosity. For an optimal set of training parameters the prediction is as high as 100%.
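
    A minimal sketch of one possible curvature-based tortuosity score for a single vessel centerline, assuming the skeletonization and partitioning have already produced an ordered polyline; the scoring formula is illustrative rather than the paper's exact definition.

        # Minimal sketch: mean absolute turning angle per unit length of a polyline.
        import numpy as np

        def tortuosity(points):
            p = np.asarray(points, dtype=float)
            seg = np.diff(p, axis=0)
            lengths = np.linalg.norm(seg, axis=1)
            u = seg / lengths[:, None]
            # angle between consecutive unit tangent vectors
            cosang = np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0)
            angles = np.arccos(cosang)
            return angles.sum() / lengths.sum()

        if __name__ == "__main__":
            t = np.linspace(0, 4 * np.pi, 200)
            straight = np.column_stack([t, np.zeros_like(t)])
            wavy = np.column_stack([t, 0.8 * np.sin(3 * t)])   # tortuous vessel segment
            print("straight:", round(tortuosity(straight), 4))
            print("wavy:    ", round(tortuosity(wavy), 4))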

  17. The acoustic-modeling problem in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Brown, Peter F.

    1987-12-01

    This thesis examines the acoustic-modeling problem in automatic speech recognition from an information-theoretic point of view. This problem is to design a speech-recognition system which can extract from the speech waveform as much information as possible about the corresponding word sequence. The information extraction process is broken down into two steps: a signal processing step which converts a speech waveform into a sequence of information bearing acoustic feature vectors, and a step which models such a sequence. This thesis is primarily concerned with the use of hidden Markov models to model sequences of feature vectors which lie in a continuous space such as R^N. It explores the trade-off between packing a lot of information into such sequences and being able to model them accurately. The difficulty of developing accurate models of continuous parameter sequences is addressed by investigating a method of parameter estimation which is specifically designed to cope with inaccurate modeling assumptions.

  18. Aircraft automatic flight control system with model inversion

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, George

    1990-01-01

    A simulator study was conducted to verify the advantages of a Newton-Raphson model-inversion technique as a design basis for an automatic trajectory control system in an aircraft with highly nonlinear characteristics. The simulation employed a detailed mathematical model of the aerodynamic and propulsion system performance characteristics of a vertical-attitude takeoff and landing tactical aircraft. The results obtained confirm satisfactory control system performance over a large portion of the flight envelope. System response to wind gusts was satisfactory for various plausible combinations of wind magnitude and direction.

  19. Wind modeling and lateral control for automatic landing

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Bryson, A. E., Jr.

    1975-01-01

    For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aero-dynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.

  20. Automatic learning-based beam angle selection for thoracic IMRT

    SciTech Connect

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G. Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
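
    A minimal sketch of the learning step with scikit-learn: a random forest regressor maps per-beam anatomical features to a beam score and the top-scoring angles are kept; the features, scores and the simple top-k selection are synthetic stand-ins for the paper's full optimization scheme.

        # Minimal sketch: random forest beam scoring followed by top-k angle selection.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_beam_scorer(features, scores):
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            model.fit(features, scores)
            return model

        def select_beams(model, candidate_features, candidate_angles, n_beams=7):
            predicted = model.predict(candidate_features)
            order = np.argsort(predicted)[::-1]          # best-scoring beams first
            return sorted(candidate_angles[order[:n_beams]])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X_train = rng.normal(size=(1000, 12))        # synthetic anatomical features per beam
            y_train = X_train[:, 0] - 0.5 * X_train[:, 3] + rng.normal(0, 0.1, 1000)
            scorer = train_beam_scorer(X_train, y_train)

            angles = np.arange(0, 360, 10)               # candidate gantry angles
            X_cand = rng.normal(size=(len(angles), 12))
            print("selected beam angles:", select_beams(scorer, X_cand, angles))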

  1. A learning-based automatic spinal MRI segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Samarabandu, Jagath; Garvin, Greg; Chhem, Rethy; Li, Shuo

    2008-03-01

    Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances the clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve an automatic clinical segmentation which requires speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) images is extremely challenging as variations in soft tissue contrast and radio-frequency (RF) in-homogeneities cause image intensity variations. Moreover, little work has been done in this area. We proposed a generic slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of our contributions is that the proposed method is able to segment multiple images of different slices simultaneously. Our proposed method also has the potential to be imaging modality independent as it is not specific to a particular imaging modality. The proposed method consists of two stages: candidate generation and verification. The candidate generation stage is aimed at obtaining the segmentation through energy minimization. In this stage, images are first partitioned into a number of image regions. Then, a Support Vector Machine (SVM) is applied to those pre-partitioned image regions to obtain the class conditional distributions, which are then fed into an energy function and optimized with the graph-cut algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones. Experimental results show that the proposed method is very efficient and robust with respect to image slices.

  2. [Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    Massive amounts of stellar spectral data have been collected in recent years, which makes the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]) an important issue. The automatic measurement of these three parameters is significant for scientific problems such as the evolution of the universe. However, research on this problem is not yet extensive, and some current methods cannot estimate the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can predict the stellar effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]. The method requires little computation and trains quickly. Its main idea is to first build a set of mass distributions, then map the original spectral data into the mass space, and finally predict the stellar parameters with support vector regression (SVR) in that space. Stellar spectral data from SDSS-DR8 were chosen for training and testing. We also compared the predicted results of this method with the SSPP and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively.
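
    A loose sketch of the final regression step in scikit-learn, with an ordinary random projection standing in for the paper's mass-space mapping (an assumption, not the actual method) and synthetic spectra; it only illustrates the project-then-SVR pattern.

        # Minimal sketch: project high-dimensional spectra to a low-dimensional space,
        # then regress an atmospheric parameter (here Teff) with SVR.
        import numpy as np
        from sklearn.random_projection import GaussianRandomProjection
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            spectra = rng.normal(size=(400, 3000))              # flux values per wavelength bin
            teff = 5000 + 800 * spectra[:, :50].mean(axis=1) + rng.normal(0, 20, 400)

            model = make_pipeline(
                GaussianRandomProjection(n_components=50, random_state=0),  # stand-in mapping
                SVR(kernel="rbf", C=10.0))
            model.fit(spectra[:300], teff[:300])
            pred = model.predict(spectra[300:])
            print("mean abs. error [K]:", round(np.abs(pred - teff[300:]).mean(), 1))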

  3. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  4. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on the automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus, which represents a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotations of shots through close-up estimation. In the second part, we automatically detect and recognize the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, a hybrid text-image indexing and retrieval platform for video news.

  5. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras; its main goal is to monitor seismic activity, generate seismic hazard alerts and provide data for Brazilian tectonic and structural research. Each institution operates and maintains its seismic network, sharing data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are automatically performed by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was designed to process raw seismograms and generate 1D S-velocity profiles. The system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (h-k stack) and surface wave dispersion (SWD) curves, which are then used to calibrate lithosphere seismic velocity models through a joint inversion scheme. An analyst can review the results, change processing parameters and select or reject the RF traces and SWD curves used in the lithosphere model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  6. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for registration of 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment result between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.

  7. Pavement crack identification based on automatic threshold iterative method

    NASA Astrophysics Data System (ADS)

    Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao

    2017-01-01

    Crack detection is an important issue for concrete infrastructure. Firstly, the accuracy of crack geometry parameter measurement, and hence of the detection system as a whole, is directly affected by the extraction accuracy. Because cracks are unpredictable, random and irregular, it is difficult to establish a recognition model for them. Secondly, various kinds of image noise, caused by irregular lighting conditions, dark spots, freckles and bumps, influence the crack detection accuracy. In this paper, the peak threshold selection method is improved: enhancement, smoothing and denoising are performed before the iterative threshold selection, which enables automatic selection of the threshold value in real time and with stability.
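
    A minimal sketch of iterative threshold selection after smoothing, in the spirit of the description above; the isodata-style update rule, the synthetic pavement image and all parameter values are illustrative assumptions.

        # Minimal sketch: smooth the image, then iteratively select a threshold as the
        # midpoint of the two class means until it stabilizes.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def iterative_threshold(image, tol=0.5, max_iter=100):
            t = image.mean()
            for _ in range(max_iter):
                low = image[image <= t]
                high = image[image > t]
                new_t = 0.5 * (low.mean() + high.mean())
                if abs(new_t - t) < tol:
                    break
                t = new_t
            return t

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pavement = rng.normal(170, 10, (200, 200))
            pavement[95:105, :] = rng.normal(60, 10, (10, 200))   # dark crack band
            smooth = gaussian_filter(pavement, sigma=1.0)          # simple pre-processing
            t = iterative_threshold(smooth)
            crack_mask = smooth < t
            print("threshold:", round(t, 1), "crack pixels:", int(crack_mask.sum()))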

  8. Automatic extraction of road networks from remotely sensed images based on GIS knowledge

    NASA Astrophysics Data System (ADS)

    Sui, Haigang; Hua, Li; Gong, Jianya

    2003-06-01

    Automatically extracting and updating road networks is a key task for updating geo-spatial information, especially in developing countries. In this paper, an overall framework for automatic road extraction is presented first, and then the strategy and algorithms that use GIS data for road extraction are discussed. A hybrid method for road extraction based on structural and statistical information is emphasized. Different extraction strategies and grouping techniques are employed for the different extraction methods. Because of the importance of structural information in road extraction, the extraction of candidate road segments based on structural information is described. For road extraction from images of different resolutions, different grouping techniques are applied: a grouping technique based on whole relations for low-resolution images and a grouping technique based on a new profile-tracing algorithm for high-resolution images. Road extraction based on statistical information supplements the structural information: a new statistical model is presented, a candidate road-tracing algorithm based on adaptive templates is discussed, and grouping based on a ribbon-snake model is briefly introduced. Automatic road recognition is a necessary task for automatically extracting road networks, so road recognition knowledge is collected in a knowledge base and a road recognition expert system is built. Fuzzy theory is applied to represent road models and to reason over road knowledge. A strategy for using global information to guide further road extraction is presented. Finally, some examples and a summary are given.

  9. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  10. Automatic indexing of scanned documents: a layout-based approach

    NASA Astrophysics Data System (ADS)

    Esser, Daniel; Schuster, Daniel; Muthmann, Klemens; Berger, Michael; Schill, Alexander

    2012-01-01

    Archiving official written documents such as invoices, reminders and account statements in business and private area gets more and more important. Creating appropriate index entries for document archives like sender's name, creation date or document number is a tedious manual work. We present a novel approach to handle automatic indexing of documents based on generic positional extraction of index terms. For this purpose we apply the knowledge of document templates stored in a common full text search index to find index positions that were successfully extracted in the past.

  11. A fully automatic system for acid-base coulometric titrations

    PubMed Central

    Cladera, A.; Caro, A.; Estela, J. M.; Cerdà, V.

    1990-01-01

    An automatic system for acid-base titrations by electrogeneration of H+ and OH- ions, with potentiometric end-point detection, was developed. The system includes a PC-compatible computer for instrumental control, data acquisition and processing, which allows up to 13 samples to be analysed sequentially with no human intervention. The system performance was tested on the titration of standard solutions, which it carried out with low errors and RSD. It was subsequently applied to the analysis of various samples of environmental and nutritional interest, specifically waters, soft drinks and wines. PMID:18925283

  12. Automatic target recognition based on cross-plot.

    PubMed

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository.

  13. Automatic Target Recognition Based on Cross-Plot

    PubMed Central

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository. PMID:21980508

  14. Research on fiber diameter automatic measurement based on image detection

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Jiang, Yu; Shen, Wen; Han, Guangjie

    2010-10-01

    In this paper, we present a method for Fiber Diameter Automatic Measurement (FDAM). The design is based on image detection technology and provides rapid and accurate measurement of the average fiber diameter. Firstly, the sample fiber image is preprocessed using an improved median filtering algorithm; then edge detection with the Sobel operator is introduced to detect the target fiber; finally, the diameter at arbitrary points and the average diameter of the fiber are measured precisely with a shortest-path search algorithm. Experiments are conducted to verify the accuracy of the measurement, and simulations show that measurement errors caused by human factors can be reduced to a desirable level.
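
    A minimal sketch of the measurement chain (median filtering, Sobel edges, per-row width between the outer edges); the shortest-path step is omitted, and the synthetic fiber image and threshold are illustrative.

        # Minimal sketch: denoise, detect edges, and estimate fiber width per image row.
        import numpy as np
        from scipy.ndimage import median_filter, sobel

        def fiber_diameter_per_row(image, edge_threshold=0.3):
            smooth = median_filter(image, size=3)
            edges = np.abs(sobel(smooth, axis=1))             # horizontal gradients
            edges = edges / edges.max()
            diameters = []
            for row in edges:
                cols = np.flatnonzero(row > edge_threshold)
                if len(cols) >= 2:
                    diameters.append(cols[-1] - cols[0])       # pixels between outer edges
            return np.array(diameters)

        if __name__ == "__main__":
            img = np.zeros((100, 200))
            img[:, 80:112] = 1.0                               # a synthetic 32-pixel-wide "fiber"
            d = fiber_diameter_per_row(img)
            print("mean diameter [px]:", round(d.mean(), 2))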

  15. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  16. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  17. Automatic translation of digraph to fault-tree models

    NASA Astrophysics Data System (ADS)

    Iverson, David L.

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  19. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  20. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    NASA Astrophysics Data System (ADS)

    Zhou, You

    2013-03-01

    The prediction of intonational phrase (IP) breaks is important for both the naturalness and intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Results of comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms classification and regression trees (CART); that log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, the templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction while significantly reducing the size of the ME model.

  1. The Automatic Measuring Machines and Ground-Based Astrometry

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.

    The introduction of automatic measuring machines into astronomical investigations a little more than a quarter of a century ago has substantially increased the range and scale of projects that astronomers have been able to realize since then. During that time there have been dozens of photographic sky surveys, which have covered all of the sky more than once. Thanks to the high accuracy and speed of automatic measuring machines, photographic astrometry has been able to create high-precision catalogs such as CpC2. Investigations of the structure and kinematics of the stellar components of our Galaxy have been revolutionized in the last decade by the advent of automated plate measuring machines. But in an age of rapidly evolving electronic detectors and space-based catalogs, expected soon, one could think that the twilight hours of astronomical photography have arrived. Against that point of view, astronomers such as D.Monet (U.S.N.O.), L.G.Taff (STScI), M.K.Tsvetkov (IA BAS) and others have contended that photographic astronomy can still evolve in several ways. One of them sounds as: "...special efforts must be taken to extract useful information from the photographic archives before the plates degrade and the technology required to measure them disappears". Another is the minimization of the systematic errors of ground-based star catalogs by employing an appropriate reduction technology and sufficiently dense and precise space-based reference star catalogs. In addition, the use of higher-resolution and higher-quantum-efficiency emulsions such as Tech Pan, and some of the new methods of processing the digitized information, hold great promise for future deep (B<25) surveys (Bland-Hawthorn et al. 1993, AJ, 106, 2154). Thus not only is the continued hard work of all existing automatic measuring machines apparently needed, but the design, development and employment of a new generation of portable, mobile scanners is also very necessary. The

  2. Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN

    NASA Technical Reports Server (NTRS)

    Griffis, H.

    1985-01-01

    Large general purpose finite element programs require users to develop large quantities of input data. General purpose pre-processors are used to decrease the effort required to develop structural models. Further reduction of effort can be achieved by specific application pre-processors. Automatic Dynamic Aircraft Modeler (ADAM) is one such application specific pre-processor. General purpose pre-processors use points, lines and surfaces to describe geometric shapes. Specifying that ADAM is used only for aircraft structures allows generic structural sections, wing boxes and bodies, to be pre-defined. Hence with only gross dimensions, thicknesses, material properties and pre-defined boundary conditions a complete model of an aircraft can be created.

  3. Machine learning-based automatic detection of pulmonary trunk

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Deng, Kun; Liang, Jianming

    2011-03-01

    Pulmonary embolism is a common cardiovascular emergency with about 600,000 cases occurring annually and causing approximately 200,000 deaths in the US. CT pulmonary angiography (CTPA) has become the reference standard for PE diagnosis, but the interpretation of these large image datasets is made complex and time consuming by the intricate branching structure of the pulmonary vessels, a myriad of artifacts that may obscure or mimic PEs, and suboptimal bolus of contrast and inhomogeneities within the pulmonary arterial blood pool. To meet this challenge, several approaches for computer aided diagnosis of PE in CTPA have been proposed. However, none of these approaches is capable of detecting central PEs, distinguishing the pulmonary artery from the vein to effectively remove any false positives from the veins, and dynamically adapting to suboptimal contrast conditions associated with CTPA scans. Overcoming these shortcomings requires highly efficient and accurate identification of the pulmonary trunk. For this purpose, in this paper, we present a machine learning based approach for automatically detecting the pulmonary trunk. Our idea is to train a cascaded AdaBoost classifier with a large number of Haar features extracted from CTPA image samples, so that the pulmonary trunk can be automatically identified by sequentially scanning the CTPA images and classifying each encountered sub-image with the trained classifier. Our approach outperforms an existing anatomy-based approach, requiring no explicit representation of anatomical knowledge and achieving nearly 100% accuracy when tested on a large number of cases.

  4. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  6. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    PubMed

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide resulting in over 200,000 deaths. It is prevalent mainly in developing countries where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these extract relevant features from the audio signal and subsequently classify them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis successfully from all audio recordings without any false diagnosis. It can also automatically detect individual cough sounds with 92% accuracy and PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreaks control.

  7. Automatic Tooth Segmentation of Dental Mesh Based on Harmonic Fields.

    PubMed

    Liao, Sheng-hui; Liu, Shi-jian; Zou, Bei-ji; Ding, Xi; Liang, Ye; Huang, Jun-hui

    2015-01-01

    An important preprocessing step in computer-aided orthodontics is to segment teeth from the dental models accurately, which should involve as few manual interactions as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is a common occurrence in clinical cases. Most published methods in this area are either inaccurate or require lots of manual interactions. Motivated by the state-of-the-art general mesh segmentation methods that adopted the theory of harmonic field to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically with robustness and efficiency.

  8. SVM-based automatic diagnosis method for keratoconus

    NASA Astrophysics Data System (ADS)

    Gao, Yuhong; Wu, Qiang; Li, Jing; Sun, Jiande; Wan, Wenbo

    2017-06-01

    Keratoconus is a progressive corneal disease that can lead to serious myopia and astigmatism, or even to corneal transplantation, if it worsens. Early detection of keratoconus is extremely important for monitoring and controlling the condition. In this paper, we propose an automatic diagnosis algorithm for keratoconus that discriminates normal eyes from keratoconus ones. We select the parameters obtained by Oculyzer as the corneal features, which characterize the cornea both directly and indirectly. In our experiment, 289 normal cases and 128 keratoconus cases are divided into training and test sets. Far better than other kernels, the linear SVM kernel achieves a sensitivity of 94.94% and a specificity of 97.87% when all parameters are used for training. In single-parameter experiments with the linear kernel, elevation (92.03% sensitivity, 98.61% specificity) and thickness (97.28% sensitivity, 97.82% specificity) showed good classification ability. Combining corneal elevation and thickness, the proposed method reaches 97.43% sensitivity and 99.19% specificity. The experiments demonstrate that the proposed automatic diagnosis method is feasible and reliable.

  9. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    PubMed Central

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide resulting in over 200,000 deaths. It is prevalent mainly in developing countries where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these extract relevant features from the audio signal and subsequently classify them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis successfully from all audio recordings without any false diagnosis. It can also automatically detect individual cough sounds with 92% accuracy and PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreaks control. PMID:27583523

  10. Automatic domain decomposition of proteins by a Gaussian Network Model.

    PubMed

    Kundu, Sibsankar; Sorensen, Dan C; Phillips, George N

    2004-12-01

    Proteins are often comprised of domains of apparently independent folding units. These domains can be defined in various ways, but one useful definition divides the protein into substructures that seem to move more or less independently. The same methods that allow fairly accurate calculation of motion can be used to help classify these substructures. We show how the Gaussian Network Model (GNM), commonly used for determining motion, can also be adapted to automatically classify domains in proteins. Parallels between this physical network model and graph theory implementation are apparent. The method is applied to a nonredundant set of 55 proteins, and the results are compared to the visual assignments by crystallographers. Apart from decomposing proteins into structural domains, the algorithm can generally be applied to any large macromolecular system to decompose it into motionally decoupled sub-systems. Copyright 2004 Wiley-Liss, Inc.
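
    A minimal GNM sketch along these lines builds the Kirchhoff (connectivity) matrix from C-alpha contacts and splits residues by the sign of the slowest internal mode. The 7 Angstrom cutoff and the sign-based two-domain split are common GNM conventions assumed here, not necessarily the authors' exact criteria.

```python
# Sketch: Gaussian Network Model domain decomposition from C-alpha coordinates.
import numpy as np

def gnm_domains(ca_coords, cutoff=7.0):
    """ca_coords: (N, 3) array of C-alpha positions in Angstrom."""
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)          # contacts -> off-diagonal -1
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = residue degree
    vals, vecs = np.linalg.eigh(kirchhoff)
    slowest = vecs[:, 1]                             # index 0 is the trivial zero mode
    return np.where(slowest >= 0, 0, 1)              # residues split by mode sign

# Usage sketch: labels = gnm_domains(np.loadtxt("ca_coords.txt"))
```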

  11. Automatic identification of activity-rest periods based on actigraphy.

    PubMed

    Crespo, Cristina; Aboy, Mateo; Fernández, José Ramón; Mojón, Artemio

    2012-04-01

    We describe a novel algorithm for identification of activity/rest periods based on actigraphy signals designed to be used for a proper estimation of ambulatory blood pressure monitoring parameters. Automatic and accurate determination of activity/rest periods is critical in cardiovascular risk assessment applications including the evaluation of dipper versus non-dipper status. The algorithm is based on adaptive rank-order filters, rank-order decision logic, and morphological processing. The algorithm was validated on a database of 104 subjects including actigraphy signals for both the dominant and non-dominant hands (i.e., 208 actigraphy recordings). The algorithm achieved a mean performance above 94.0%, with an average number of 0.02 invalid transitions per 48 h.
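
    The chain of rank-order filtering, a decision rule and morphological cleaning can be sketched as below; the window lengths and the threshold are illustrative assumptions rather than the adaptive parameters of the published algorithm.

```python
# Sketch: activity/rest identification from an actigraphy signal using a
# rank-order (median) filter, a crude threshold, and morphological cleaning.
# Assumes a non-trivial, non-negative activity signal.
import numpy as np
from scipy.ndimage import median_filter, binary_closing, binary_opening

def rest_periods(activity, samples_per_hour=60):
    smoothed = median_filter(activity, size=samples_per_hour)    # rank-order filter
    threshold = 0.2 * np.median(smoothed[smoothed > 0])          # illustrative decision rule
    rest = smoothed < threshold
    # Morphological processing: remove brief spurious transitions.
    structure = np.ones(samples_per_hour // 2, dtype=bool)
    rest = binary_closing(rest, structure=structure)
    rest = binary_opening(rest, structure=structure)
    return rest   # boolean mask, True during estimated rest
```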

  12. Automatic Parking of Self-Driving CAR Based on LIDAR

    NASA Astrophysics Data System (ADS)

    Lee, B.; Wei, Y.; Guo, I. Y.

    2017-09-01

    To overcome the deficiencies of ultrasonic sensors and cameras, this paper proposes a method of autonomous parking for a self-driving car using an HDL-32E LiDAR. First, the 3-D point cloud data were preprocessed, and the minimum size of a parking space was calculated according to vehicle dynamics. Second, the rapidly-exploring random tree (RRT) algorithm was improved in two aspects based on the moving characteristics of an autonomous car, and the parking path was calculated on the basis of the vehicle's dynamics and collision constraints. In addition, a fuzzy logic controller was used to control the brake and accelerator in order to achieve stable speed control. Finally, experiments were conducted in an autonomous car, and the results show that the proposed automatic parking system is feasible and effective.

  13. Automatic classification of visual evoked potentials based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual pathway is assessed by a set of parameters that describe the characteristic extremes (waves) of the time-domain waveform. The decision process is complex; therefore, the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm enables an individual case to be assigned to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
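
    A compact sketch of such a pipeline extracts per-band wavelet energies and feeds them to linear discriminant analysis; the wavelet family, decomposition level and energy features are plausible assumptions, not the paper's exact feature set.

```python
# Sketch: wavelet-decomposition features + LDA for normal vs. pathological VEPs.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def vep_features(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # one energy per band

def train_vep_classifier(signals, labels):
    """signals: list of 1-D VEP recordings; labels: 0 = normal, 1 = pathological."""
    X = np.vstack([vep_features(s) for s in signals])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf
```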

  14. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  16. Automatic validation of computational models using pseudo-3D spatio-temporal model checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2014-12-02

    Computational models play an increasingly important role in systems biology for generating predictions and in synthetic biology as executable prototypes/designs. For real life (clinical) applications there is a need to scale up and build more complex spatio-temporal multiscale models; these could enable investigating how changes at small scales reflect at large scales and vice versa. Results generated by computational models can be applied to real life applications only if the models have been validated first. Traditional in silico model checking techniques only capture how non-dimensional properties (e.g. concentrations) evolve over time and are suitable for small scale systems (e.g. metabolic pathways). The validation of larger scale systems (e.g. multicellular populations) additionally requires capturing how spatial patterns and their properties change over time, which is not considered by traditional non-spatial approaches. We developed and implemented a methodology for the automatic validation of computational models with respect to both their spatial and temporal properties. Stochastic biological systems are represented by abstract models which assume a linear structure of time and a pseudo-3D representation of space (2D space plus a density measure). Time series data generated by such models are provided as input to parameterised image processing modules which automatically detect and analyse spatial patterns (e.g. cells) and clusters of such patterns (e.g. cellular populations). For capturing how spatial and numeric properties change over time, the Probabilistic Bounded Linear Spatial Temporal Logic is introduced. Given a collection of time series data and a formal spatio-temporal specification, the model checker Mudi ( http://mudi.modelchecking.org ) determines probabilistically if the formal specification holds for the computational model or not. Mudi is an approximate probabilistic model checking platform which enables users to choose between frequentist and

  17. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    PubMed

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning their observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of the CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report, using bullets fired from ten consecutively manufactured barrels, support the developed model.

  18. A CNN based Hybrid approach towards automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal V.; Katiyar, Sunil K.

    2013-06-01

    Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape and contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), SIFT, coresets, and cellular automata. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimisation, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology has also proved effective in providing intelligent interpretation and adaptive resampling.

  19. An extended Kalman filter based automatic frequency control loop

    NASA Technical Reports Server (NTRS)

    Hinedi, S.

    1988-01-01

    An Automatic Frequency Control (AFC) loop based on an Extended Kalman Filter (EKF) is introduced and analyzed in detail. The scheme involves an EKF which operates on a modified set of data in order to track the frequency of the incoming signal. The algorithm can also be viewed as a modification to the well known cross-product AFC loop. A low carrier-to-noise ratio (CNR), high-dynamic environment is used to test the algorithm and the probability of loss-of-lock is assessed via computer simulations. The scheme is best suited for scenarios in which the frequency error variance can be compromised to achieve a very low operating CNR threshold. This technique can easily be incorporated in the Advanced Receiver (ARX), requiring minimum software modifications.
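
    A two-state (phase, frequency) EKF tracking a noisy sinusoid gives a minimal picture of the idea; the state model, the scalar measurement model and the noise levels below are illustrative assumptions and differ from the modified-data formulation used in the actual loop.

```python
# Sketch: extended Kalman filter tracking the frequency of a noisy sinusoid,
# illustrating the EKF-based AFC idea with a toy state/measurement model.
import numpy as np

def ekf_afc(z, dt, q_freq=1e-3, r=0.1):
    """z: real-valued samples of a noisy sinusoid; returns frequency estimates."""
    x = np.array([0.0, 1.0])                  # state: [phase (rad), frequency (rad/s)]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-frequency state transition
    Q = np.diag([0.0, q_freq])
    freqs = []
    for zk in z:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the scalar measurement z = cos(phase) + noise.
        h = np.cos(x[0])
        H = np.array([[-np.sin(x[0]), 0.0]])
        S = float(H @ P @ H.T) + r
        K = (P @ H.T).ravel() / S
        x = x + K * (zk - h)
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
        freqs.append(x[1])
    return np.array(freqs)
```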

  20. Weighted ensemble based automatic detection of exudates in fundus photographs.

    PubMed

    Prentasic, Pavle; Loncaric, Sven

    2014-01-01

    Diabetic retinopathy (DR) is a visual complication of diabetes, which has become one of the leading causes of preventable blindness in the world. Exudate detection is an important problem in automatic screening systems for detection of diabetic retinopathy using color fundus photographs. In this paper, we present a method for detection of exudates in color fundus photographs, which combines several preprocessing and candidate extraction algorithms to increase the exudate detection accuracy. The first stage of the method consists of an ensemble of several exudate candidate extraction algorithms. In the learning phase, simulated annealing is used to determine weights for combining the results of the ensemble candidate extraction algorithms. The second stage of the method uses a machine learning-based classification for detection of exudate regions. The experimental validation was performed using the DRiDB color fundus image set. The validation demonstrated that the proposed method achieved higher accuracy in comparison to state-of-the-art methods.

  1. Synthetic aperture radar automatic target recognition based on curvelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Zhuo; Jiao, Licheng; He, Jun

    2009-10-01

    A novel synthetic aperture radar (SAR) automatic target recognition (ATR) approach based on the curvelet transform is proposed. Existing approaches cannot extract sufficiently effective features. Our method therefore concentrates on a new, effective representation of the moving and stationary target acquisition and recognition (MSTAR) database, in order to obtain a more accurate target region and reduce the feature dimension. First, features are extracted from the MSTAR database through the optimal sparse representation given by curvelets, which yields a clear target region. However, because part of the image edges is lost during segmentation, we also extract a coarse feature to compensate for the fine-feature error introduced by segmentation. The final features, consisting of the fine and coarse features, are classified by an SVM with a Gaussian radial basis function (RBF) kernel. The experiments show that the proposed algorithm achieves a better correct classification rate.

  2. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    NASA Astrophysics Data System (ADS)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way of solving this problem is the integrated modernization of power engineering objects at mechanical engineering companies, intended to increase the efficiency of energy supply control and to improve the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we built a mathematical model of the data exchange process between the measuring transformers and a universal SCADA system. The modeling results show that the discussed equipment meets the requirements of the mentioned Standard and that the use of a universal SCADA system for these purposes is preferable and economically reasonable. For the modeling, the authors used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  3. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
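
    The optimization loop can be illustrated on a single qubit: piecewise-constant controls are adjusted to maximize a state-transfer fidelity. The paper differentiates the fidelity by automatic differentiation on GPUs; the sketch below substitutes a finite-difference gradient and toy parameters, purely to show the structure of the loop.

```python
# Sketch: gradient-based pulse optimization for a single qubit state transfer.
# Finite differences stand in for automatic differentiation (assumption);
# drift strength, step count and learning rate are toy values.
import numpy as np
from scipy.linalg import expm

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(controls, dt=0.1):
    """Piecewise-constant sigma_x controls on top of a fixed sigma_z drift."""
    psi = np.array([1, 0], dtype=complex)            # start in |0>
    for u in controls:
        H = 0.5 * SZ + u * SX
        psi = expm(-1j * H * dt) @ psi
    return psi

def fidelity(controls, target=np.array([0, 1], dtype=complex)):
    return abs(np.vdot(target, evolve(controls))) ** 2

def optimize(n_steps=20, iters=200, lr=0.5, eps=1e-6):
    controls = np.zeros(n_steps)
    for _ in range(iters):
        grad = np.zeros(n_steps)
        for k in range(n_steps):                     # finite-difference gradient
            bumped = controls.copy()
            bumped[k] += eps
            grad[k] = (fidelity(bumped) - fidelity(controls)) / eps
        controls += lr * grad                        # gradient ascent on fidelity
    return controls, fidelity(controls)
```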

  4. Automatic cortical sulcal parcellation based on surface principal direction flow field tracking.

    PubMed

    Li, Gang; Guo, Lei; Nie, Jingxin; Liu, Tianming

    2009-01-01

    Automatic parcellation of cortical surfaces into sulcal based regions is of great importance in structural and functional mapping of human brain. In this paper, a novel method is proposed for automatic cortical sulcal parcellation based on the geometric characteristics of the cortical surface including its principal curvatures and principal directions. This method is composed of two major steps: 1) employing the hidden Markov random field model (HMRF) and the expectation maximization (EM) algorithm on the maximum principal curvatures of the cortical surface for sulcal region segmentation, and 2) using a principal direction flow field tracking method on the cortical surface for sulcal basin segmentation. The flow field is obtained by diffusing the principal direction field on the cortical surface. The method has been successfully applied to the inner cortical surfaces of twelve healthy human brain MR images. Both quantitative and qualitative evaluation results demonstrate the validity and efficiency of the proposed method.

  5. Towards Automatic Semantic Labelling of 3D City Models

    NASA Astrophysics Data System (ADS)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step towards an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score indicating whether a region represents the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that serve as starting points in a region growing algorithm that creates regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties, implying the next steps in the research.

  6. Automatic Construction of Anomaly Detectors from Graphical Models

    SciTech Connect

    Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.

  7. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design efforts, i.e., they are difficult and complex, yet involve much redundant effort. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  8. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  9. Model of human/liquid cooling garment interaction for space suit automatic thermal control.

    PubMed

    Nyberg, K L; Diller, K R; Wissler, E H

    2001-02-01

    The Wissler human thermoregulation model was augmented to incorporate simulation of a space suit thermal control system that includes interaction with a liquid cooled garment (LCG) and ventilation gas flow through the suit. The model was utilized in the design process of an automatic controller intended to maintain thermal neutrality of an exercising subject wearing a liquid cooling garment. An experimental apparatus was designed and built to test the efficacy of specific physiological state measurements to provide feedback data for input to the automatic control algorithm. Control of the coolant inlet temperature to the LCG was based on evaluation of transient physiological parameters that describe the thermal state of the subject, including metabolic rate, skin temperatures, and core temperature. Experimental evaluation of the control algorithm function was accomplished in an environmental chamber under conditions that simulated the thermal environment of a space suit and transient metabolic work loads typical of astronaut extravehicular activity (EVA). The model was also applied to analyze experiments to evaluate performance of the automatic control system in maintaining thermal comfort during extensive transient metabolic profiles for a range of environmental temperatures. Finally, the model was used to predict the efficacy of the LCG thermal controller for providing thermal comfort for a variety of regiments that may be encountered in future space missions. Simulations with the Wissler model accurately predicted the thermal interaction between the subject and LCG for a wide range of metabolic profiles and environmental conditions and matched the function of the automatic temperature controller for inlet cooling water to the LCG.

  10. Enhancing Automaticity through Task-Based Language Learning

    ERIC Educational Resources Information Center

    De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena

    2007-01-01

    In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a…

  12. Search-matching algorithm for acoustics-based automatic sniper localization

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    Most modern automatic sniper localization systems are based on the acoustical emissions produced by gunfire events. In order to estimate the spatial coordinates of the sniper location, these systems measure the time delays of arrival of the acoustical shock wave fronts at a microphone array. In more advanced systems, model-based estimation of the nonlinear distortion parameters of the N-waves is used to estimate the projectile trajectory and calibre. In this work we address the sniper localization problem using a model-based search-matching approach. The automatic sniper localization algorithm works by searching for the acoustic model of ballistic shock waves which best matches the measured data. For this purpose, we implement a previously released acoustic model of ballistic shock waves. The sniper location, the projectile trajectory and calibre, and the muzzle velocity are regarded as the input variables of this model. A search algorithm is implemented in order to find which combination of the input variables minimizes a fitness function defined as the distance between measured and simulated data. In this way, the sniper location, the projectile trajectory and calibre, and the muzzle velocity can be found. In order to evaluate the performance of the algorithm, we conduct computer-based experiments using simulated gunfire event data calculated at the nodes of a virtual distributed sensor network. Preliminary simulation results are quite promising, showing fast convergence of the algorithm and good localization accuracy.
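
    The search-matching loop can be sketched with a global optimizer minimizing the mismatch between measured and simulated arrival times at a sensor network. The forward model below is plain time-of-flight geometry, a deliberate simplification of the full ballistic shock-wave model, and the sensor layout and search bounds are placeholders.

```python
# Sketch: search-matching localization by minimizing the mismatch between
# measured and simulated arrival times over a candidate source position.
import numpy as np
from scipy.optimize import differential_evolution

SPEED_OF_SOUND = 343.0   # m/s, assumed constant

def simulated_arrivals(source, sensors):
    return np.linalg.norm(sensors - source, axis=1) / SPEED_OF_SOUND

def locate_sniper(sensors, measured_arrivals, bounds):
    """bounds: [(xmin, xmax), (ymin, ymax), (zmin, zmax)] search volume."""
    def fitness(source):
        sim = simulated_arrivals(np.asarray(source), sensors)
        # Compare relative delays so the unknown emission time cancels out.
        return np.sum(((sim - sim[0]) - (measured_arrivals - measured_arrivals[0])) ** 2)
    result = differential_evolution(fitness, bounds, seed=0)
    return result.x, result.fun
```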

  13. A new method of automatic landmark tagging for shape model construction via local curvature scale

    NASA Astrophysics Data System (ADS)

    Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li

    2008-03-01

    Segmentation of organs in medical images is a difficult task requiring very often the use of model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among shapes. Manual positioning of landmarks is a tedious, time-consuming, and error prone task, and almost impossible in the 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. Then, these landmarks are propagated to each shape in the training set, defining this way the correspondences among the shapes. Altogether 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.

  14. Approach for the Semi-Automatic Verification of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Belton, D.; Moncrieff, S.

    2013-04-01

    In the field of spatial sciences, there are a large number of disciplines and techniques for capturing data to solve a variety of different tasks and problems for different applications. Examples include: traditional survey for boundary definitions, aerial imagery for building models, and laser scanning for heritage facades. These techniques have different attributes such as the number of dimensions, accuracy and precision, and the format of the data. However, because of the number of applications and jobs, these data sets, captured from different sensor platforms and for different purposes, will often overlap in some way over time. In most cases, while this data is archived, it is not used in future applications to add value to the data capture campaigns of current projects. It is also the case that newly acquired data are often not used to combine with and improve existing models and data integrity. The purpose of this paper is to discuss a methodology and infrastructure to automatically support this concept. That is, based on a job specification, to automatically query existing and newly acquired data based on temporal and spatial relations, and to automatically combine them and generate the best solution. To this end, there are three main challenges to examine: change detection, thematic accuracy and data matching.

  15. Automatic classification of sentences to support Evidence Based Medicine.

    PubMed

    Kim, Su Nam; Martinez, David; Cavedon, Lawrence; Yencken, Lars

    2011-03-29

    Given a set of pre-defined medical categories used in Evidence Based Medicine, we aim to automatically annotate sentences in medical abstracts with these labels. We constructed a corpus of 1,000 medical abstracts annotated by hand with specified medical categories (e.g. Intervention, Outcome). We explored the use of various features based on lexical, semantic, structural, and sequential information in the data, using Conditional Random Fields (CRF) for classification. For the classification tasks over all labels, our systems achieved micro-averaged f-scores of 80.9% and 66.9% over datasets of structured and unstructured abstracts respectively, using sequential features. In labeling only the key sentences, our systems produced f-scores of 89.3% and 74.0% over structured and unstructured abstracts respectively, using the same sequential features. The results over an external dataset were lower (f-scores of 63.1% for all labels, and 83.8% for key sentences). Of the features we used, the best for classifying any given sentence in an abstract were based on unigrams, section headings, and sequential information from preceding sentences. These features resulted in improved performance over a simple bag-of-words approach, and outperformed feature sets used in previous work.

  16. A rule-based algorithm for automatic bond type perception.

    PubMed

    Zhang, Qian; Zhang, Wei; Li, Youyong; Wang, Junmei; Zhang, Liling; Hou, Tingjun

    2012-10-31

    Assigning bond orders is a necessary and essential step for characterizing a chemical structure correctly in force field based simulations. Several methods have been developed to do this. They all have advantages but also limitations. Here, an automatic algorithm for assigning chemical connectivity and bond orders, regardless of hydrogens, for organic molecules is provided; only three-dimensional coordinates and element identities are needed. The algorithm uses hard rules, length rules and conjugation rules to fix the structures. The hard rules determine bond orders based on basic chemical rules; the length rules determine bond orders from the distance between two atoms, based on a set of predefined values for different bond types; the conjugation rules determine bond orders using the length information derived from the previous rule, the bond angles, and some small structural patterns. The algorithm is extensively evaluated on three datasets and achieves good prediction accuracy for all of them. Finally, the limitations and future improvements of the algorithm are discussed.
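
    The length-rule stage alone is easy to sketch: a tentative bond order is read off a small table of bond-length cutoffs. The cutoff values below are textbook approximations used for illustration, not the calibrated thresholds of the published algorithm.

```python
# Sketch: assigning a tentative bond order from the interatomic distance,
# using illustrative length cutoffs (Angstrom) for a few element pairs.
import numpy as np

# (sorted element pair) -> list of (maximum length in Angstrom, bond order)
LENGTH_RULES = {
    ("C", "C"): [(1.25, 3), (1.38, 2), (1.70, 1)],
    ("C", "O"): [(1.28, 2), (1.55, 1)],
    ("C", "N"): [(1.20, 3), (1.34, 2), (1.55, 1)],
}

def bond_order(elem_i, elem_j, xyz_i, xyz_j):
    pair = tuple(sorted((elem_i, elem_j)))
    dist = float(np.linalg.norm(np.asarray(xyz_i) - np.asarray(xyz_j)))
    for max_len, order in LENGTH_RULES.get(pair, []):
        if dist <= max_len:
            return order
    return 0   # not bonded (or left for the hard/conjugation rules to decide)
```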

  17. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    SciTech Connect

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, making them inefficient and impossible to automate. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is the automatic selection of the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip limit. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
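
    A minimal sketch of the processing chain follows: Gaussian high-pass filtering, CLAHE, and a parameter choice that maximizes the entropy of the result. A coarse grid search stands in for the interior-point optimizer, and the parameter ranges are assumptions.

```python
# Sketch: high-pass + CLAHE enhancement with entropy-maximizing parameter choice.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import equalize_adapthist
from skimage.measure import shannon_entropy

def enhance(image, weight, kernel_size, clip_limit):
    img = image.astype(float)
    img = img - img.min()
    img /= img.max() + 1e-12                          # normalize to [0, 1]
    highpass = np.clip(img - weight * gaussian_filter(img, sigma=10), 0, 1)
    return equalize_adapthist(highpass, kernel_size=kernel_size, clip_limit=clip_limit)

def auto_enhance(image):
    best, best_entropy = None, -np.inf
    for weight in (0.3, 0.5, 0.7):                    # illustrative parameter grid
        for kernel_size in (32, 64, 128):
            for clip_limit in (0.01, 0.02, 0.05):
                candidate = enhance(image, weight, kernel_size, clip_limit)
                h = shannon_entropy(candidate)
                if h > best_entropy:
                    best, best_entropy = candidate, h
    return best
```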

  18. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903

  19. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.

    PubMed

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases.

  20. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.
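
    The idea of adapting the step from an error estimate can be illustrated with a much simpler controller: backward Euler for the 1-D diffusion equation with a step-doubling local error estimate, in place of the dG a posteriori estimate and stability factor described above. The grid, tolerance and boundary treatment are illustrative assumptions.

```python
# Sketch: adaptive time stepping for 1-D diffusion (confined aquifer head h),
# with the step size controlled by a step-doubling error estimate.
import numpy as np

def backward_euler_step(h, dt, D, dx):
    """One implicit step of h_t = D h_xx with fixed-head (Dirichlet) ends."""
    n = len(h)
    r = D * dt / dx ** 2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, :], A[-1, :] = 0, 0
    A[0, 0], A[-1, -1] = 1, 1                  # boundary rows keep the end values fixed
    return np.linalg.solve(A, h)

def adaptive_solve(h0, t_end, D, dx, tol=1e-4, dt=1.0):
    h, t = h0.copy(), 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = backward_euler_step(h, dt, D, dx)
        fine = backward_euler_step(backward_euler_step(h, dt / 2, D, dx), dt / 2, D, dx)
        err = np.max(np.abs(fine - coarse))    # step-doubling local error estimate
        if err <= tol:
            h, t = fine, t + dt                # accept the step
            dt *= min(2.0, max(0.5, 0.9 * np.sqrt(tol / (err + 1e-15))))
        else:
            dt *= 0.5                          # reject and retry with a smaller step
    return h
```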

  1. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

    Rubble mound breakwaters maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds from the emerged part of the structure that can be modelled to make it more useful and easy to handle. This work introduces a methodology for the automatic modelling of breakwaters with armour units of cube shape. The algorithm is divided in three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected is around 56 % for two of the point clouds and 32 % for the third one over the total physical cubes. Accuracy assessment is done by comparison with manually drawn cubes calculating the differences between the vertexes. It ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s. The computing time increases with the number of cubes and the requirements of collision detection.

  2. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework, we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  3. Power-based Shift Schedule for Pure Electric Vehicle with a Two-speed Automatic Transmission

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqi; Liu, Yanfang; Liu, Qiang; Xu, Xiangyang

    2016-11-01

    This paper introduces a comprehensive shift schedule for a two-speed automatic transmission of a pure electric vehicle. Considering the driving ability and efficiency performance of electric vehicles, a power-based shift schedule built on three principles is proposed. This comprehensive shift schedule takes the current vehicle speed and motor load power as input parameters in order to satisfy the vehicle's driving power demand with the lowest energy consumption. A simulation model has been established to verify the dynamic and economic performance of the comprehensive shift schedule. Compared with traditional dynamic and economic shift schedules, the simulation results indicate that the power-based shift schedule is superior to the traditional shift schedules.

  4. Automatic ground-based station for vicarious calibration

    NASA Astrophysics Data System (ADS)

    Schmechtig, Catherine; Santer, Richard P.; Roger, Jean-Claude; Meygret, Aime

    1997-12-01

    Vicarious calibration generally requires field activities in order to characterize surface reflectances and the atmosphere, to enable prediction of the radiance at the satellite level. To limit the human presence in the field, an automatic ground-based station was defined, as well as the protocol required to achieve satellite vicarious calibration. The solar irradiance measurements are self-calibrated using the Langley technique. The instrument was designed so that, firstly, the same gun measures both the solar irradiance and the radiance (sky or ground) and, secondly, the field of view is constant over the spectral range. These two conditions offer an opportunity for intercalibration between radiance and irradiance, provided the field of view is well defined. Experimental determination of the field of view is possible in the UV region based on Rayleigh scattering. We then describe how to derive the TOA signal from the measurements. Two approaches have been developed according to the energetic characteristic we want to estimate (reflectance or radiance). Preliminary results of a field campaign in June 1997 are reported.

  5. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
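
    The clustering step can be sketched with affinity propagation over crowdsourced RSSI vectors, yielding one representative fingerprint per cluster; the input matrix and the subsequent anchor-linking step are hypothetical placeholders for what the paper describes.

```python
# Sketch: obtaining representative fingerprints from crowdsourced RSSI vectors
# with affinity-propagation clustering, in the spirit of the AP-Cluster step.
import numpy as np
from sklearn.cluster import AffinityPropagation

def representative_fingerprints(fingerprints):
    """fingerprints: (samples x access points) array of RSSI values
    collected by walking users (placeholder input)."""
    ap = AffinityPropagation(random_state=0).fit(fingerprints)
    reps = ap.cluster_centers_                  # one representative per cluster
    return reps, ap.labels_

# The radio-map is then assembled by linking each representative fingerprint
# to a physical reference position (e.g. a detected door) based on similarity.
```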

  6. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading for prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to Gleason grading system which is the most widespread method for histological grading of prostate tissues. We proposed two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be promoted up to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still keeps the most powerful discriminating capability in grading prostate images.
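
    A fractal-dimension feature of the kind described can be illustrated with a basic box-counting estimate on a binarized region of interest; the differential box-counting variants typically used for intensity surfaces are more elaborate.

```python
# Sketch: box-counting fractal dimension of a binarized image region.
# Assumes a non-empty boolean mask (e.g. a thresholded region of interest).
import numpy as np

def box_counting_dimension(mask):
    sizes, counts = [], []
    box = max(mask.shape) // 2
    while box >= 2:
        count = 0
        for i in range(0, mask.shape[0], box):
            for j in range(0, mask.shape[1], box):
                if mask[i:i + box, j:j + box].any():
                    count += 1
        sizes.append(box)
        counts.append(count)
        box //= 2
    # Slope of log(count) vs. log(1/box size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```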

  7. Automatic target validation based on neuroscientific literature mining for tractography.

    PubMed

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by text-mining models, both in rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of the text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large amount of publications and abstracts. We believe this tool will be useful in helping the neuroscience community to facilitate connectivity studies of particular brain regions. The text mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.

  8. Automatic target validation based on neuroscientific literature mining for tractography

    PubMed Central

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L.; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by text-mining models, both in rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of the text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large amount of publications and abstracts. We believe this tool will be useful in helping the neuroscience community to facilitate connectivity studies of particular brain regions. The text mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/. PMID

  9. Neural Signatures of Controlled and Automatic Retrieval Processes in Memory-based Decision-making.

    PubMed

    Khader, Patrick H; Pachur, Thorsten; Weber, Lilian A E; Jost, Kerstin

    2016-01-01

    Decision-making often requires retrieval from memory. Drawing on the neural ACT-R theory [Anderson, J. R., Fincham, J. M., Qin, Y., & Stocco, A. A central circuit of the mind. Trends in Cognitive Sciences, 12, 136-143, 2008] and other neural models of memory, we delineated the neural signatures of two fundamental retrieval aspects during decision-making: automatic and controlled activation of memory representations. To disentangle these processes, we combined a paradigm developed to examine neural correlates of selective and sequential memory retrieval in decision-making with a manipulation of associative fan (i.e., the decision options were associated with one, two, or three attributes). The results show that both the automatic activation of all attributes associated with a decision option and the controlled sequential retrieval of specific attributes can be traced in material-specific brain areas. Moreover, the two facets of memory retrieval were associated with distinct activation patterns within the frontoparietal network: The dorsolateral prefrontal cortex was found to reflect increasing retrieval effort during both automatic and controlled activation of attributes. In contrast, the superior parietal cortex only responded to controlled retrieval, arguably reflecting the sequential updating of attribute information in working memory. This dissociation in activation pattern is consistent with ACT-R and constitutes an important step toward a neural model of the retrieval dynamics involved in memory-based decision-making.

  10. Automatic QSAR modeling of ADME properties: blood-brain barrier penetration and aqueous solubility.

    PubMed

    Obrezanova, Olga; Gola, Joelle M R; Champness, Edmund J; Segall, Matthew D

    2008-01-01

    In this article, we present an automatic model generation process for building QSAR models using Gaussian Processes, a powerful machine learning modeling method. We describe the stages of the process that ensure models are built and validated within a rigorous framework: descriptor calculation, splitting data into training, validation and test sets, descriptor filtering, application of modeling techniques and selection of the best model. We apply this automatic process to data sets of blood-brain barrier penetration and aqueous solubility and compare the resulting automatically generated models with 'manually' built models using external test sets. The results demonstrate the effectiveness of the automatic model generation process for two types of data sets commonly encountered in building ADME QSAR models, a small set of in vivo data and a large set of physico-chemical data.
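
    The stages listed above map naturally onto a small scikit-learn sketch: split the data, filter descriptors, standardize, fit a Gaussian Process and check validation and test performance. The kernel choice, filter threshold and split fractions are assumptions rather than the published protocol.

```python
# Sketch: an automatic QSAR model-building pipeline with a Gaussian Process.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import r2_score

def build_qsar_model(X, y):
    # Split into training, validation and external test sets (assumed fractions).
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    model = make_pipeline(
        VarianceThreshold(1e-4),                     # descriptor filtering
        StandardScaler(),
        GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
    )
    model.fit(X_tr, y_tr)
    # Model selection would compare several candidates on the validation set;
    # here we simply report validation and test performance of the single model.
    return model, r2_score(y_val, model.predict(X_val)), r2_score(y_te, model.predict(X_te))
```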

  11. Automatic Mapping of Martian Landforms Using Segmentation-based Classification

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Stepinski, T. F.; Vilalta, R.

    2007-03-01

    We use terrain segmentation and classification techniques to automatically map landforms on Mars. The method is applied to six sites to obtain geomorphic maps geared toward rapid characterization of impact craters.

  12. Acoustical model of small calibre ballistic shock waves in air for automatic sniper localization applications

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    The phenomenon of ballistic shock wave emission by a small calibre projectile at supersonic speed is quite relevant in automatic sniper localization applications. When available, ballistic shock wave analysis makes possible the estimation of the main ballistic features of a gunfire event. The propagation of ballistic shock waves in air is a process which mainly involves nonlinear distortion, or steepening, and atmospheric absorption. Current ballistic shock wave propagation models used in automatic sniper localization systems only consider nonlinear distortion effects. This means that only the rates of change of the shock peak pressure and the N-wave duration with distance are considered in the determination of the miss distance. In the present paper we present an improved acoustical model of small calibre ballistic shock wave propagation in air, intended to be used in acoustics-based automatic sniper localization applications. In our approach, we have considered nonlinear distortion, but we have additionally introduced the effects of atmospheric sound absorption. Atmospheric absorption is implemented in the time domain in order to achieve faster calculation times than those obtained in the frequency domain. Furthermore, we take advantage of the fact that atmospheric absorption plays a fundamental role in the rise times of the shocks, and introduce the rate of change of the rise time with distance as a third parameter to be used in the determination of the miss distance. This leads to a more accurate and robust estimation of the miss distance, and consequently of the projectile trajectory and the spatial coordinates of the gunshot origin.

  13. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans that is independent of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  14. Automatic Cell Phone Menu Customization Based on User Operation History

    NASA Astrophysics Data System (ADS)

    Fukazawa, Yusuke; Hara, Mirai; Ueno, Hidetoshi

    Mobile devices are becoming more and more difficult to use due to the sheer number of functions now supported. In this paper, we propose a menu customization system that ranks functions so as to make interesting functions easy to access, including both frequently used functions and functions that are used infrequently but have the potential to satisfy the user. Concretely, we define the features of the phone's functions by extracting keywords from the manufacturer's manual, and propose a method that uses a Ranking SVM (Support Vector Machine) to rank the functions based on the user's operation history. We conducted a one-week home-use test to evaluate the efficiency and usability of menu customization. The results of this test show that the average rank on the last day was half that of the first day, and that the user could find, on average, 3.14 more kinds of new functions, ones that the user did not know about before the test, on a daily basis. This shows that the proposed customized menu supports the user by making it easier to access frequent items and to find new interesting functions. From interviews, almost 70% of the users were satisfied with the ranking provided by menu customization as well as with the usability of the resulting menus. In addition, the interviews show that automatic cell phone menu customization is more appropriate for mobile phone beginners than for expert users.
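
    A common way to realize a Ranking SVM is the pairwise transform: preference pairs become a binary classification problem on feature differences. The sketch below illustrates that idea with synthetic function features and preferences; it is not the paper's system or data.

```python
# Minimal Ranking-SVM sketch via the pairwise transform: turn preference
# pairs (function a preferred over function b) into a binary classification
# problem on difference vectors, then rank new items by the learned weights.
# The feature vectors and preferences below are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
items = rng.normal(size=(30, 8))        # keyword-based feature vectors of phone functions
true_w = rng.normal(size=8)
utility = items @ true_w                # hidden "user interest" that the preferences follow

# Build pairwise training data: label +1 if the first item is preferred over the second.
diffs, labels = [], []
for i in range(len(items)):
    for j in range(len(items)):
        if i != j:
            diffs.append(items[i] - items[j])
            labels.append(1 if utility[i] > utility[j] else -1)

clf = LinearSVC(C=1.0, max_iter=10000).fit(np.array(diffs), np.array(labels))

# Rank all functions by the learned scoring direction (higher score = higher in the menu).
scores = items @ clf.coef_.ravel()
ranking = np.argsort(-scores)
print("top five functions:", ranking[:5])
```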

  15. Automatic aeroponic irrigation system based on Arduino’s platform

    NASA Astrophysics Data System (ADS)

    Montoya, A. P.; Obando, F. A.; Morales, J. G.; Vargas, G.

    2017-06-01

    Recirculating hydroponic culture techniques, such as aeroponics, have several advantages over traditional agriculture and aim to improve the efficiency and environmental impact of agriculture. These techniques require continuous monitoring and automation for proper operation. In this work, an automatically monitored aeroponic irrigation system based on the Arduino free software platform was developed. Analog and digital sensors for measuring the temperature, flow and level of the nutrient solution in a real greenhouse were implemented. In addition, the pH and electric conductivity of the nutrient solutions are monitored using the Arduino differential configuration. The sensor network and the acquisition and automation system are managed by two Arduino modules in a master-slave configuration, which communicate with each other wirelessly via Wi-Fi. Furthermore, data are stored on micro SD memories and the information is loaded onto a web page in real time. The developed device provides important agronomic information when tested with an arugula culture (Eruca sativa Mill). The system could also be employed as an early warning system to prevent irrigation malfunctions.

  16. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
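
    A rough stand-in for the storyboard-detection idea can be put together with OpenCV primitives: edge detection, contour extraction, size filtering and a simple reading-order sort. The sketch below is only illustrative; it does not reproduce the paper's edge-segment chaining and border-line selection, and the image path is a placeholder.

```python
# Rough stand-in for storyboard (panel) detection: Canny edges, contour
# extraction, rectangle filtering, and a crude reading-order sort
# (rows top to bottom, panels right to left within a row, one possible
# convention). 'page.png' is a placeholder path.
import cv2
import numpy as np

page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
if page is None:
    raise SystemExit("provide a comic page image as page.png")

edges = cv2.Canny(page, 50, 150)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))      # close small gaps in panel borders

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
panels = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 0.02 * page.size:                           # keep only large, panel-sized regions
        panels.append((x, y, w, h))

# Crude reading order: group rows in 100-pixel bands, then sort right to left.
panels.sort(key=lambda b: (b[1] // 100, -b[0]))
for i, (x, y, w, h) in enumerate(panels, 1):
    print(f"panel {i}: x={x} y={y} w={w} h={h}")
```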

  17. An automatic target recognition system based on SAR image

    NASA Astrophysics Data System (ADS)

    Li, Qinfu; Wang, Jinquan; Zhao, Bo; Luo, Furen; Xu, Xiaojian

    2009-10-01

    In this paper, an automatic target recognition (ATR) system based on synthetic aperture radar (SAR) is proposed. This ATR system can play an important role in the simulation of an up-to-date battlefield environment and be used in ATR research. To establish a complete and usable system, the processing of SAR images is divided into four main stages: de-noising, detection, cluster-discrimination and segment-recognition. The first three stages are used to search for regions of interest (ROIs). Once the ROIs are extracted, the recognition stage computes the similarity between the ROIs and the templates generated with the electromagnetic simulation software National Electromagnetic Scattering Code (NESC). Due to the lack of SAR raw data, the electromagnetically simulated images are added to the measured SAR background to simulate the battlefield environment. The purpose of the system is to find the ROIs, which can be artificial military targets such as tanks and armored cars, and to categorize the ROIs into the right classes according to the existing templates. The results show that the proposed system achieves satisfactory performance.

  18. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  19. Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models

    PubMed Central

    Rojas Q., Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions. PMID:21858069

  20. Modeling and Automatic Feedback Control of Tremor: Adaptive Estimation of Deep Brain Stimulation

    PubMed Central

    Rehan, Muhammad; Hong, Keum-Shik

    2013-01-01

    This paper discusses modeling and automatic feedback control of (postural and rest) tremor for adaptive-control-methodology-based estimation of deep brain stimulation (DBS) parameters. The simplest linear oscillator-based tremor model, between stimulation amplitude and tremor, is investigated by utilizing input-output knowledge. Further, a nonlinear generalization of the oscillator-based tremor model, useful for derivation of a control strategy involving incorporation of parametric-bound knowledge, is provided. Using the Lyapunov method, a robust adaptive output feedback control law, based on measurement of the tremor signal from the fingers of a patient, is formulated to estimate the stimulation amplitude required to control the tremor. By means of the proposed control strategy, an algorithm is developed for estimation of DBS parameters such as amplitude, frequency and pulse width, which provides a framework for development of an automatic clinical device for control of motor symptoms. The DBS parameter estimation results for the proposed control scheme are verified through numerical simulations. PMID:23638163

  1. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to that of a real chick eye and could be used for morphological studies and FEA.

  2. Simulator training to automaticity leads to improved skill transfer compared with traditional proficiency-based training: a randomized controlled trial.

    PubMed

    Stefanidis, Dimitrios; Scerbo, Mark W; Montero, Paul N; Acker, Christina E; Smith, Warren D

    2012-01-01

    We hypothesized that novices will perform better in the operating room after simulator training to automaticity compared with traditional proficiency-based training (the current standard training paradigm). Simulator-acquired skill translates to the operating room, but the skill transfer is incomplete. Secondary task metrics reflect the ability of trainees to multitask (automaticity) and may improve performance assessment on simulators and skill transfer by indicating when learning is complete. Novices (N = 30) were enrolled in an IRB-approved, blinded, randomized, controlled trial. Participants were randomized into an intervention (n = 20) and a control (n = 10) group. The intervention group practiced on the FLS suturing task until they achieved expert levels of time and errors (proficiency), were tested on a live porcine fundoplication model, continued simulator training until they achieved expert levels on a visual spatial secondary task (automaticity) and were retested on the operating room (OR) model. The control group participated only during testing sessions. Performance scores were compared within and between groups during testing sessions. Intervention group participants achieved proficiency after 54 ± 14 repetitions and automaticity after an additional 109 ± 57 repetitions. Participants achieved better scores in the OR after automaticity training [345 (range, 0-537)] compared with after proficiency-based training [220 (range, 0-452); P < 0.001]. Simulator training to automaticity takes more time but is superior to proficiency-based training, as it leads to improved skill acquisition and transfer. Secondary task metrics that reflect trainee automaticity should be implemented during simulator training to improve learning and skill transfer.

  3. Fully automatic vertebra detection in x-ray images based on multi-class SVM

    NASA Astrophysics Data System (ADS)

    Lecron, Fabian; Benjelloun, Mohammed; Mahmoudi, Saïd

    2012-02-01

    Automatically detecting vertebral bodies in X-ray images is a very complex task, especially because of the noise and low contrast inherent in that kind of medical imaging modality. Therefore, contributions in the literature mainly address only two imaging modalities: Computed Tomography (CT) and Magnetic Resonance (MR). Few works are dedicated to conventional X-ray radiography, and they propose mostly semi-automatic methods. However, vertebra detection is a key step in many medical applications such as vertebra segmentation, vertebral morphometry, etc. In this work, we develop a fully automatic approach for vertebra detection based on a learning method. The idea is to detect a vertebra by its anterior corners without human intervention. To this end, the points of interest in the radiograph are first detected by an edge polygonal approximation. Then, a SIFT descriptor is used to train an SVM model, so that each point of interest can be classified in order to decide whether or not it belongs to a vertebra. Our approach has been assessed by the detection of 250 cervical vertebræ on radiographs. The results show a very high precision, with a corner detection rate of 90.4% and a vertebra detection rate from 81.6% to 86.5%.
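
    The classification step (SIFT descriptors at candidate points fed to a multi-class SVM) can be sketched as follows. The candidate points, labels and image path are placeholders, and the edge polygonal approximation is not reproduced.

```python
# Sketch of the point-classification step: compute SIFT descriptors at
# candidate points of interest and train a multi-class SVM to decide
# whether each point is a vertebra corner (and which one) or background.
# The image path, candidate points and labels are placeholders.
import cv2
import numpy as np
from sklearn.svm import SVC

img = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("provide a radiograph image as radiograph.png")

# Candidate points would come from an edge polygonal approximation; here they are random.
rng = np.random.default_rng(0)
pts = rng.integers(20, min(img.shape) - 20, size=(200, 2))
labels = rng.integers(0, 3, size=200)          # e.g. 0 = background, 1 = upper corner, 2 = lower corner

sift = cv2.SIFT_create()
keypoints = [cv2.KeyPoint(float(x), float(y), 16) for x, y in pts]
keypoints, descriptors = sift.compute(img, keypoints)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(descriptors, labels[: len(keypoints)])
pred = clf.predict(descriptors)
print("predicted classes for the first ten candidate points:", pred[:10])
```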

  4. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    PubMed

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-02-16

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. The computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
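
    The headline overlap figure is a Dice similarity coefficient. The sketch below shows the standard definition on binary masks, applied to synthetic placeholder masks rather than the study's segmentations.

```python
# Dice similarity coefficient between an automatic segmentation mask and
# a ground-truth mask: the overlap metric behind figures such as 82.5%.
# The two binary masks below are synthetic placeholders.
import numpy as np

def dice(auto_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = auto_mask.astype(bool), gt_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:44, 18:40] = True                       # placeholder "ground truth" region
auto = np.roll(gt, shift=2, axis=1)           # placeholder automatic result, slightly shifted
print(f"Dice = {dice(auto, gt):.3f}")
```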

  5. Mindfulness-Based Parent Training: Strategies to Lessen the Grip of Automaticity in Families with Disruptive Children

    ERIC Educational Resources Information Center

    Dumas, Jean E.

    2005-01-01

    Disagreements and conflicts in families with disruptive children often reflect rigid patterns of behavior that have become overlearned and automatized with repeated practice. These patterns are mindless: They are performed with little or no awareness and are highly resistant to change. This article introduces a new, mindfulness-based model of…

  6. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration based on photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla may be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.

  7. Automatic Recommendations for E-Learning Personalization Based on Web Usage Mining Techniques and Information Retrieval

    ERIC Educational Resources Information Center

    Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa

    2009-01-01

    In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…

  8. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  9. Automatic calibration of dial gauges based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Feng, Haiping; Kong, Ming

    2008-10-01

    To address the image characteristics of dial gauges, an automatic detection system for dial gauges is designed and implemented using computer vision and digital image processing methods. An improved image subtraction method and an adaptive threshold segmentation method are used for preprocessing. A new method, called region segmentation, is proposed to partition the dial image so that only the useful blocks of the dial image are processed rather than the whole area; this reduces the amount of computation greatly and improves the processing speed effectively. The method has been applied in the automatic detection system for dial gauges, which makes it possible for the detection of dial gauges to be completed intelligently, automatically and rapidly.
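
    The preprocessing chain described (image subtraction followed by adaptive threshold segmentation) can be sketched with OpenCV as below; the image paths are placeholders and the region-segmentation heuristic is not reproduced.

```python
# Sketch of the preprocessing chain: subtract a reference frame from the
# current dial image, then apply adaptive thresholding to isolate the
# moving pointer. Image paths are placeholders; the paper's
# region-segmentation step is not reproduced here.
import cv2

reference = cv2.imread("dial_reference.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("dial_current.png", cv2.IMREAD_GRAYSCALE)
if reference is None or current is None:
    raise SystemExit("provide reference and current dial images")

diff = cv2.absdiff(current, reference)                 # image subtraction isolates what moved
diff = cv2.GaussianBlur(diff, (5, 5), 0)               # suppress sensor noise before thresholding
pointer = cv2.adaptiveThreshold(diff, 255,
                                cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 31, -5)

cv2.imwrite("pointer_mask.png", pointer)
print("pointer pixels:", int((pointer > 0).sum()))
```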

  10. Model development for automatic guidance of a VTOL aircraft to a small aviation ship

    NASA Technical Reports Server (NTRS)

    Goka, T.; Sorensen, J. A.; Schmidt, S. F.; Paulk, C. H., Jr.

    1980-01-01

    This paper describes a detailed mathematical model which has been assembled to study automatic approach and landing guidance concepts to bring a VTOL aircraft onto a small aviation ship. The model is used to formulate system simulations which in turn are used to evaluate different guidance concepts. Ship motion (Sea State 5), wind-over-deck turbulence, MLS-based navigation, implicit model following flight control, lift fan V/STOL aircraft, ship and aircraft instrumentation errors, various steering laws, and appropriate environmental and human factor constraints are included in the model. Results are given to demonstrate use of the model and simulation to evaluate performance of the flight system and to choose appropriate guidance techniques for further cockpit simulator study.

  11. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems which use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic build of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.

  12. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied to and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in a few minutes and the second in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  13. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a great deal of research in hydrological modelling has been devoted to improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can, with expert knowledge, judge the hydrographs simultaneously in detail and in a holistic view. This integrated eye-ball verification procedure available to a human can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe Efficiency or the Kling-Gupta Efficiency as a benchmark during calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving open questions concerning the quality of a simulation. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that
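
    The two objective criteria named above are commonly defined as follows. The sketch uses the standard Nash-Sutcliffe and (2009) Kling-Gupta formulations on synthetic placeholder series, not the study's data.

```python
# Standard definitions of the two objective criteria mentioned above:
# Nash-Sutcliffe Efficiency (NSE) and the 2009 Kling-Gupta Efficiency (KGE).
# The observed/simulated runoff series below are synthetic placeholders.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                  # correlation component
    alpha = sim.std() / obs.std()                    # variability ratio
    beta = sim.mean() / obs.mean()                   # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(0)
observed = np.abs(rng.gamma(2.0, 5.0, size=365))     # placeholder daily runoff
simulated = observed * 0.95 + rng.normal(0, 1.0, size=365)
print(f"NSE = {nse(observed, simulated):.3f}, KGE = {kge(observed, simulated):.3f}")
```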

  14. Automatic intelligibility assessment of speakers after laryngeal cancer by means of acoustic modeling.

    PubMed

    Bocklet, Tobias; Riedhammer, Korbinian; Nöth, Elmar; Eysholdt, Ulrich; Haderlein, Tino

    2012-05-01

    One aspect of voice and speech evaluation after laryngeal cancer is acoustic analysis. Perceptual evaluation by expert raters is a standard in the clinical environment for global criteria such as overall quality or intelligibility. So far, automatic approaches evaluate acoustic properties of pathologic voices based on voiced/unvoiced distinction and fundamental frequency analysis of sustained vowels. Because of the high amount of noisy components and the increasing aperiodicity of highly pathologic voices, a fully automatic analysis of fundamental frequency is difficult. We introduce a purely data-driven system for the acoustic analysis of pathologic voices based on recordings of a standard text. Short-time segments of the speech signal are analyzed in the spectral domain, and speaker models based on this information are built. These speaker models act as a clustered representation of the acoustic properties of a person's voice and are thus characteristic for speakers with different kinds and degrees of pathologic conditions. The system is evaluated on two different data sets with speakers reading standardized texts. One data set contains 77 speakers after laryngeal cancer treated with partial removal of the larynx. The other data set contains 54 totally laryngectomized patients, equipped with a Provox shunt valve. Each speaker was rated by five expert listeners regarding three different criteria: strain, voice quality, and speech intelligibility. We show correlations for each data set with r and ρ≥0.8 between the automatic system and the mean value of the five raters. The interrater correlation of one rater to the mean value of the remaining raters is in the same range. We thus assume that for selected evaluation criteria, the system can serve as a validated objective support for acoustic voice and speech analysis. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  15. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning

    PubMed Central

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture. PMID:28757863

  16. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning.

    PubMed

    Wang, Guan; Sun, Yu; Wang, Jianxin

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.
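
    The best-performing setup reported (a pretrained VGG16 fine-tuned by transfer learning for four severity stages) can be sketched with torchvision as below. The data, hyper-parameters and single training step are placeholders, not the authors' training protocol.

```python
# Sketch of transfer learning with a pretrained VGG16 for four severity
# stages, in the spirit of the best model reported. Data loading, epochs
# and hyper-parameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. healthy / early / middle / end stage

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                           # keep convolutional features frozen
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)    # replace the final classification layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)

# One illustrative training step on a random batch standing in for real leaf images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss after one step:", float(loss))
```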

  17. Thesaurus-Based Automatic Indexing: A Study of Indexing Failure.

    ERIC Educational Resources Information Center

    Caplan, Priscilla Louise

    This study examines automatic indexing performed with a manually constructed thesaurus on a document collection of titles and abstracts of library science master's papers. Errors are identified when the meaning of a posted descriptor, as identified by context in the thesaurus, does not match that of the passage of text which occasioned the…

  18. Thesaurus-Based Automatic Indexing: A Study of Indexing Failure.

    ERIC Educational Resources Information Center

    Caplan, Priscilla Louise

    This study examines automatic indexing performed with a manually constructed thesaurus on a document collection of titles and abstracts of library science master's papers. Errors are identified when the meaning of a posted descriptor, as identified by context in the thesaurus, does not match that of the passage of text which occasioned the…

  19. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
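
    The feature-based matching step that replaces manual measurements can be sketched with OpenCV's SIFT implementation and Lowe's ratio test, as below; the subsequent relative orientation, resection and bundle adjustment are not shown, and the image paths are placeholders.

```python
# Sketch of the feature-based matching step only: detect SIFT keypoints in
# two overlapping photographs and keep matches that pass Lowe's ratio test.
# Relative orientation and bundle adjustment are not shown; image paths
# are placeholders.
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
if img1 is None or img2 is None:
    raise SystemExit("provide two overlapping images")

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} tentative correspondences")
```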

  20. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  1. Automatic Annotation of Spatial Expression Patterns via Sparse Bayesian Factor Models

    PubMed Central

    Pruteanu-Malinici, Iulian; Mace, Daniel L.; Ohler, Uwe

    2011-01-01

    Advances in reporters for gene expression have made it possible to document and quantify expression patterns in 2D–4D. In contrast to microarrays, which provide data for many genes but averaged and/or at low resolution, images reveal the high spatial dynamics of gene expression. Developing computational methods to compare, annotate, and model gene expression based on images is imperative, considering that available data are rapidly increasing. We have developed a sparse Bayesian factor analysis model in which the observed expression diversity among a large set of high-dimensional images is modeled by a small number of hidden common factors. We apply this approach on embryonic expression patterns from a Drosophila RNA in situ image database, and show that the automatically inferred factors provide for a meaningful decomposition and represent common co-regulation or biological functions. The low-dimensional set of factor mixing weights is further used as features by a classifier to annotate expression patterns with functional categories. On human-curated annotations, our sparse approach reaches similar or better classification of expression patterns at different developmental stages, when compared to other automatic image annotation methods using thousands of hard-to-interpret features. Our study therefore outlines a general framework for large microscopy data sets, in which both the generative model itself, as well as its application for analysis tasks such as automated annotation, can provide insight into biological questions. PMID:21814502

  2. Validating Automatically Generated Students' Conceptual Models from Free-text Answers at the Level of Concepts

    NASA Astrophysics Data System (ADS)

    Pérez-Marín, Diana; Pascual-Nieto, Ismael; Rodríguez, Pilar; Anguiano, Eloy; Alfonseca, Enrique

    2008-11-01

    Students' conceptual models can be defined as networks of interconnected concepts, in which a confidence value (CV) is estimated for each concept. This CV indicates how confident the system is that the student knows the concept, according to how the student has used it in the free-text answers provided to an automatic free-text scoring system. In a previous work, a preliminary validation was done based on the global comparison between the score achieved by each student in the final exam and the score associated with his or her model (calculated as the average of the CVs of the concepts). A statistically significant Pearson correlation of 0.50 (p = 0.01) was reached. In order to complete those results, in this paper the level of granularity is lowered to each particular concept. In fact, the correspondence between the human estimation of how well each concept of the conceptual model is known and the computer estimation is calculated. A mean quadratic error of 0.08 between both values has been attained, which validates the automatically generated students' conceptual models at the concept level of granularity.

  3. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.

  4. Feature relevance analysis supporting automatic motor imagery discrimination in EEG based BCI systems.

    PubMed

    Álvarez-Meza, A M; Velásquez-Martínez, L F; Castellanos-Dominguez, G

    2013-01-01

    Recently, there have been many efforts to develop Brain Computer Interface (BCI) systems that allow identifying and discriminating brain activity, supporting the control of external devices, and helping to understand cognitive behaviors. In this work, a feature relevance analysis approach based on an eigen-decomposition method is proposed to support automatic Motor Imagery (MI) discrimination in electroencephalography signals for BCI systems. We select a set of features that represent the studied process as well as possible. For this purpose, a variability study is performed based on traditional Principal Component Analysis. EEG signal modelling is carried out by estimating three frequency-based features and one time-based feature. Our approach is tested on a well-known MI dataset. The attained results show that the presented algorithm can be used as a tool to support the discrimination of MI brain activity, obtaining acceptable results in comparison with state-of-the-art approaches.
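
    One simple way to obtain an eigen-decomposition based relevance score is to weight each feature's loadings on the principal components by the explained variance, as sketched below on a synthetic feature matrix; this is an illustrative stand-in, not the paper's exact criterion.

```python
# Sketch of a PCA/eigen-decomposition based relevance score: rank each
# feature by how strongly it loads on the leading principal components,
# weighted by explained variance. The matrix X is a synthetic placeholder
# for EEG-derived features.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 120, 10
X = rng.normal(size=(n_trials, n_features))
X[:, 0] += 3.0 * rng.normal(size=n_trials)        # make a couple of features dominate the variance
X[:, 3] += 2.0 * rng.normal(size=n_trials)

Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
relevance = np.abs(eigvecs) @ explained           # variance-weighted loading per feature
print("feature ranking (most to least relevant):", np.argsort(-relevance))
```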

  5. Automatic construction of statistical shape models using deformable simplex meshes with vector field convolution energy.

    PubMed

    Wang, Jinke; Shi, Changfa

    2017-04-24

    In the active shape model framework, principal component analysis (PCA) based statistical shape models (SSMs) are widely employed to incorporate high-level a priori shape knowledge of the structure to be segmented to achieve robustness. A crucial component of building SSMs is to establish shape correspondence between all training shapes, which is a very challenging task, especially in three dimensions. We propose a novel mesh-to-volume registration based shape correspondence establishment method to improve the accuracy and reduce the computational cost. Specifically, we present a greedy algorithm based deformable simplex mesh that uses vector field convolution as the external energy. Furthermore, we develop an automatic shape initialization method by using a Gaussian mixture model based registration algorithm, to derive an initial shape that has high overlap with the object of interest, such that the deformable models can then evolve more locally. We apply the proposed deformable surface model to the application of femur statistical shape model construction to illustrate its accuracy and efficiency. Extensive experiments on ten femur CT scans show that the quality of the constructed femur shape models via the proposed method is much better than that of the classical spherical harmonics (SPHARM) method. Moreover, the proposed method achieves much higher computational efficiency than the SPHARM method. The experimental results suggest that our method can be employed for effective statistical shape model construction.
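
    Once shape correspondence has been established, the PCA statistical shape model itself is straightforward: stack the corresponded landmark vectors, compute the mean shape and the principal modes of variation. The sketch below illustrates that step on synthetic landmarks; the correspondence method, which is the paper's contribution, is not shown.

```python
# Sketch of the PCA statistical shape model step, assuming shape
# correspondence is already established: stack corresponded landmark
# vectors, compute the mean shape and the principal modes of variation.
# The training shapes below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 10, 50
base = rng.normal(size=(n_landmarks, 3))
shapes = np.stack([base + 0.05 * rng.normal(size=base.shape) for _ in range(n_shapes)])

X = shapes.reshape(n_shapes, -1)                  # each row: flattened (x, y, z) landmarks
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
modes, variances = Vt, (s ** 2) / (n_shapes - 1)  # modes of variation and their variances

# A new plausible shape: mean plus a small excursion along the first mode.
b = np.zeros(len(variances))
b[0] = 1.5 * np.sqrt(variances[0])
new_shape = (mean_shape + modes.T @ b).reshape(n_landmarks, 3)
print("explained variance of first mode:", variances[0] / variances.sum())
```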

  6. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on Magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used with a wide range of analysis methods, such as the Finite Element Method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  7. Automatic Calibration of Hydrological Models in the Newly Reconstructed Catchments: Issues, Methods and Uncertainties

    NASA Astrophysics Data System (ADS)

    Nazemi, Alireza; Elshorbagy, Amin

    2010-05-01

    The use of optimisation methods has a long tradition in the calibration of conceptual hydrological models; nevertheless, most previous investigations have been made in catchments with long periods of data collection and only with respect to runoff information. The present study focuses on the automatic calibration of hydrological models using the states (i.e. soil moisture) as well as the fluxes (i.e. AET) in a prototype catchment in which an intensive gauging network collects a variety of catchment variables, yet only a short period of data is available. First, the characteristics of such a calibration attempt are highlighted and discussed and a number of research questions are proposed. Then, four different optimisation methods, i.e. Latin Hypercube Sampling, Shuffled Complex Evolution Metropolis, Multi-Objective Shuffled Complex Evolution Metropolis and Non-dominated Sorting Genetic Algorithm II, have been considered and applied for the automatic calibration of the GSDW model in a newly reconstructed oil-sands catchment in northern Alberta, Canada. It is worthwhile to mention that the original GSDW model had to be translated into MATLAB in order to enable the model to be automatically calibrated. Different conceptualisation scenarios are generated and calibrated. The calibration results have been analysed and compared in terms of the optimality and the quality of solutions. The concepts of multi-objectivity and lack of identifiability are addressed in the calibration solutions, and the best calibration algorithm is selected based on the error in representing the soil moisture content in different layers. The current study also considers uncertainties which might occur in the formulation of the calibration process by considering different calibration scenarios using the same model and dataset. The interactions among accuracy, identifiability, and model parsimony are addressed and discussed. The present investigation concludes that the calibration of

  8. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, digital morphing has been applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, a Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00 and 04:00 am were used. The GRAM technique was applied to the data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated by GRAM and by manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients between the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients are 0.946, 0.911, and 0.913, respectively, based on the GRAM technique. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  9. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module was used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  10. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    PubMed Central

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the different features available in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve, obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563

  11. Study of burn scar extraction automatically based on level set method using remote sensing data.

    PubMed

    Liu, Yang; Dai, Qin; Liu, Jianbo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the different features available in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve, obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model.
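
    The difference-image construction (NDVI and NBR changes combined into a CVA-style change magnitude) can be sketched as below on synthetic reflectance bands; the K-means initialization and the level-set evolution are not shown.

```python
# Sketch of the difference-image step: compute NDVI and NBR before and
# after the fire and combine their changes into a single change-magnitude
# image (a CVA-style magnitude). The band arrays below are synthetic
# placeholders for Landsat red / NIR / SWIR reflectances; the level-set
# evolution itself is not shown.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(0)
shape = (128, 128)
red_pre, nir_pre, swir_pre = (rng.uniform(0.05, 0.4, shape) for _ in range(3))
red_post, nir_post, swir_post = red_pre.copy(), nir_pre.copy(), swir_pre.copy()
burn = np.zeros(shape, bool)
burn[40:90, 30:100] = True                       # placeholder burned patch
nir_post[burn] *= 0.5                            # burned vegetation: NIR drops, SWIR rises
swir_post[burn] *= 1.6

d_ndvi = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
d_nbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
change_magnitude = np.hypot(d_ndvi, d_nbr)       # CVA-style magnitude of the change vector
print("mean change inside/outside the patch:",
      round(change_magnitude[burn].mean(), 3),
      round(change_magnitude[~burn].mean(), 3))
```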

  12. Automatic script identification from images using cluster-based templates

    SciTech Connect

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
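
    The template-building and matching idea (scale symbols, cluster them per script, keep centroids, pick the script whose templates match best) can be sketched as below with synthetic symbol images standing in for real ones; it is an illustration of the approach, not the reported system.

```python
# Sketch of template building and matching: scale symbol images to a fixed
# size, cluster them per script, keep cluster centroids as templates, and
# classify a new document by which script's templates match its symbols
# best. The symbol images here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

SIZE = 16

def fake_symbols(seed, n=300):
    """Stand-in for scaled, binarized symbol images of one script."""
    r = np.random.default_rng(seed)
    prototypes = r.random((5, SIZE * SIZE)) > 0.6
    return np.array(
        [prototypes[r.integers(5)] ^ (r.random(SIZE * SIZE) < 0.05) for _ in range(n)], float
    )

def build_templates(symbols, n_clusters=5):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(symbols)
    return km.cluster_centers_

templates = {"Cyrillic": build_templates(fake_symbols(1)),
             "Roman": build_templates(fake_symbols(2))}

def classify(document_symbols):
    """Pick the script whose templates lie closest to the document's symbols."""
    scores = {}
    for script, tpl in templates.items():
        d = np.linalg.norm(document_symbols[:, None, :] - tpl[None, :, :], axis=2)
        scores[script] = d.min(axis=1).mean()     # average best-template distance
    return min(scores, key=scores.get)

print(classify(fake_symbols(2, n=80)))            # expected output: Roman
```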

  13. Quantitative evaluation of background parenchymal enhancement (BPE) on breast MRI. A feasibility study with a semi-automatic and automatic software compared to observer-based scores

    PubMed Central

    Bignotti, Bianca; Tagliafico, Giulio; Tosto, Simona; Signori, Alessio; Calabrese, Massimo

    2015-01-01

    Objective: To evaluate quantitative measurements of background parenchymal enhancement (BPE) on breast MRI and compare them with observer-based scores. Methods: BPE of 48 patients (mean age: 48 years; age range: 36–66 years) referred to 3.0-T breast MRI between 2012 and 2014 was evaluated independently and blindly to each other by two radiologists. BPE was estimated qualitatively with the standard Breast Imaging Reporting and Data System (BI-RADS) scale and quantitatively with a semi-automatic and an automatic software interface. To assess intrareader agreement, MRIs were re-read after a 4-month interval by the same two readers. The Pearson correlation coefficient (r) and the Bland–Altman method were used to compare the methods used to estimate BPE. p-value <0.05 was considered significant. Results: The mean value of BPE with the semi-automatic software evaluated by each reader was 14% (range: 2–79%) for Reader 1 and 16% (range: 1–61%) for Reader 2 (p > 0.05). Mean values of BPE percentages for the automatic software were 17.5 ± 13.1 (p > 0.05 vs semi-automatic). The automatic software was unable to produce BPE values for 2 of 48 (4%) patients. With BI-RADS, interreader and intrareader values were κ = 0.70 [95% confidence interval (CI) 0.49–0.91] and κ = 0.69 (95% CI 0.46–0.93), respectively. With semi-automated software, interreader and intrareader values were κ = 0.81 (95% CI 0.59–0.99) and κ = 0.85 (95% CI 0.43–0.99), respectively. BI-RADS scores correlated with the automatic (r = 0.55, p < 0.001) and semi-automatic scores (r = 0.60, p < 0.001). Automatic scores correlated with the semi-automatic scores (r = 0.77, p < 0.001). The mean percentage difference between automatic and semi-automatic scores was 3.5% (95% CI 1.5–5.2). Conclusion: BPE quantitative evaluation is feasible with both semi-automatic and automatic software and correlates with radiologists' estimation. Advances in

  14. Quantitative evaluation of background parenchymal enhancement (BPE) on breast MRI. A feasibility study with a semi-automatic and automatic software compared to observer-based scores.

    PubMed

    Tagliafico, Alberto; Bignotti, Bianca; Tagliafico, Giulio; Tosto, Simona; Signori, Alessio; Calabrese, Massimo

    2015-01-01

    To evaluate quantitative measurements of background parenchymal enhancement (BPE) on breast MRI and compare them with observer-based scores. BPE of 48 patients (mean age: 48 years; age range: 36-66 years) referred to 3.0-T breast MRI between 2012 and 2014 was evaluated independently and blindly to each other by two radiologists. BPE was estimated qualitatively with the standard Breast Imaging Reporting and Data System (BI-RADS) scale and quantitatively with a semi-automatic and an automatic software interface. To assess intrareader agreement, MRIs were re-read after a 4-month interval by the same two readers. The Pearson correlation coefficient (r) and the Bland-Altman method were used to compare the methods used to estimate BPE. p-value <0.05 was considered significant. The mean value of BPE with the semi-automatic software evaluated by each reader was 14% (range: 2-79%) for Reader 1 and 16% (range: 1-61%) for Reader 2 (p > 0.05). Mean values of BPE percentages for the automatic software were 17.5 ± 13.1 (p > 0.05 vs semi-automatic). The automatic software was unable to produce BPE values for 2 of 48 (4%) patients. With BI-RADS, interreader and intrareader values were κ = 0.70 [95% confidence interval (CI) 0.49-0.91] and κ = 0.69 (95% CI 0.46-0.93), respectively. With semi-automated software, interreader and intrareader values were κ = 0.81 (95% CI 0.59-0.99) and κ = 0.85 (95% CI 0.43-0.99), respectively. BI-RADS scores correlated with the automatic (r = 0.55, p < 0.001) and semi-automatic scores (r = 0.60, p < 0.001). Automatic scores correlated with the semi-automatic scores (r = 0.77, p < 0.001). The mean percentage difference between automatic and semi-automatic scores was 3.5% (95% CI 1.5-5.2). BPE quantitative evaluation is feasible with both semi-automatic and automatic software and correlates with radiologists' estimation. Computerized BPE quantitative evaluation is feasible with both semi-automatic
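
    The Bland-Altman comparison reported above can be reproduced in a few lines; the sketch below uses hypothetical BPE percentages rather than the study's data and computes the bias and 95% limits of agreement.

    import numpy as np

    def bland_altman(a, b):
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd   # bias and 95% limits of agreement

    automatic = [17.0, 22.5, 9.8, 30.1, 12.4]   # hypothetical BPE percentages
    semi_auto = [14.2, 20.0, 8.5, 27.3, 11.0]
    bias, lo, hi = bland_altman(automatic, semi_auto)
    print(f"bias {bias:.1f}%, limits of agreement [{lo:.1f}%, {hi:.1f}%]")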

  15. Automatic Parallelization Using OpenMP Based on STL Semantics

    SciTech Connect

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  16. Automatic vertebral identification using surface-based registration

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
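
    A stripped-down sketch of the selection step follows: the physical point set is registered to each candidate vertebral surface and the candidate with the smallest standard deviation of the point-to-surface error is chosen. The register argument is a placeholder for any rigid surface registration routine (e.g., ICP); it is not the authors' implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_error_std(points, surface_vertices):
        dists, _ = cKDTree(surface_vertices).query(points)   # point-to-surface distances
        return dists.std()

    def select_vertebra(physical_points, candidate_surfaces, register):
        """candidate_surfaces: list of (n_i, 3) vertex arrays; register: placeholder rigid registration."""
        stds = [surface_error_std(register(physical_points, surf), surf)
                for surf in candidate_surfaces]
        return int(np.argmin(stds))   # index of the best-matching vertebra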

  17. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  18. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  19. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator

    DTIC Science & Technology

    2006-08-01

    Kim, Jae-Jun; Agrawal, Brij N. [The DTIC record reproduces only report-form fields and truncated reference fragments; no abstract is available.]

  20. Automatic Sleep Scoring Based on Modular Rule-Based Reasoning Units and Signal Processing Units

    DTIC Science & Technology

    2007-11-02

    Keywords: scoring, rule-based reasoning, multi-staged. Integrated analysis of the state of sleep through polysomnography is crucial for the diagnosis of sleep-related disease, but conventional analog polysomnography systems require a tremendous amount of paper and much labor from trained experts, so digital polysomnography with an automatic analysis system has become the trend. In sleep analysis, sleep stage scoring is

  1. Modeling and Prototyping of Automatic Clutch System for Light Vehicles

    NASA Astrophysics Data System (ADS)

    Murali, S.; Jothi Prakash, V. M.; Vishal, S.

    2017-03-01

    Nowadays, recycling or regenerating waste into something useful is appreciated all around the globe, as it reduces the greenhouse gas emissions that contribute to global climate change. This study deals with providing an automatic clutch mechanism in vehicles to facilitate the smooth changing of gears. It proposes using the exhaust gases that are normally expelled as waste from the turbocharger to actuate the clutch mechanism. At present, clutches in four wheelers are operated automatically by using an air compressor. In this study, a conceptual design is proposed in which the clutch is operated by the exhaust gas from the turbocharger, eliminating the air compressor used in the existing system and freeing riders from operating the clutch manually. The work involved the development, analysis and validation of the conceptual design through simulation software. The developed conceptual design of the automatic pneumatic clutch system was then tested with a prototype.

  2. An image-based automatic mesh generation and numerical simulation for a population-based analysis of aerosol delivery in the human lungs

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2013-11-01

    The authors propose a method to automatically generate three-dimensional subject-specific airway geometries and meshes for computational fluid dynamics (CFD) studies of aerosol delivery in the human lungs. The proposed method automatically expands the computed tomography (CT)-based airway skeleton to generate the centerline (CL)-based model, and then fits it to the CT-segmented geometry to generate the hybrid CL-CT-based model. To produce a turbulent laryngeal jet known to affect aerosol transport, we developed a physiologically consistent laryngeal model that can be attached to the trachea of the above models. We used Gmsh to automatically generate the mesh for the above models. To assess the quality of the models, we compared the regional aerosol distributions in a human lung predicted by the hybrid model and the manually generated CT-based model. The aerosol distribution predicted by the hybrid model was consistent with the prediction by the CT-based model. We applied the hybrid model to 8 healthy and 16 severe asthmatic subjects, and the average geometric error was 3.8% of the branch radius. The proposed method can potentially be applied to branch-by-branch analyses of a large population of healthy and diseased lungs. NIH Grants R01-HL-094315 and S10-RR-022421, CT data provided by SARP, and computer time provided by XSEDE.

  3. Migration Based Event Detection and Automatic P- and S-Phase Picking in Hengill, Southwest Iceland

    NASA Astrophysics Data System (ADS)

    Wagner, F.; Tryggvason, A.; Gudmundsson, O.; Roberts, R.; Bodvarsson, R.; Fehler, M.

    2015-12-01

    Automatic detection of seismic events is a complicated process. Common procedures depend on the detection of seismic phases (e.g. P and S) in single-trace analyses and their correct association with locatable point sources. The event detection threshold is thus directly related to the single-trace detection threshold. Highly sensitive phase detectors detect low signal-to-noise ratio (S/N) phases but also produce a low percentage of locatable events. Short inter-event times of only a few seconds, which are not uncommon during seismic or volcanic crises, complicate any event association algorithm. We present an event detection algorithm based on seismic migration of trace attributes into an a-priori three-dimensional (3D) velocity model, and evaluate its capacity as an automatic detector compared to conventional methods. Detecting events using seismic migration removes the need for phase association. The event detector runs on a stack of time-shifted traces, which increases S/N and thus allows for a low detection threshold. Detected events come with an origin time and a location estimate, enabling a focused trace analysis, including P- and S-phase recognition, to discard false detections and build a basis for accurate automatic phase picking. We apply the migration-based detection algorithm to data from a semi-permanent seismic network at Hengill, an active volcanic region with several geothermal production sites in southwest Iceland. The network includes 26 stations with inter-station distances down to 5 km. Results show a high success rate compared to the manually picked catalogue (up to 90% detected). New detections, which were missed by the standard detection routine, show a generally good ratio of true to false alarms, i.e. most of the new events are locatable.
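
    The core of a migration-based detector can be sketched as a shift-and-stack over a grid of candidate source nodes; the travel times, sampling interval and detection threshold below are assumptions for illustration, not the values used in the Hengill study.

    import numpy as np

    def migrate_and_detect(cf, travel_times, dt, threshold):
        """cf: (n_sta, n_samples) characteristic functions.
        travel_times: (n_nodes, n_sta) travel times in seconds from each grid node.
        Returns (origin_sample, node_index) pairs where the stack exceeds the threshold."""
        n_sta, n_samp = cf.shape
        shifts = np.round(travel_times / dt).astype(int)
        detections = []
        for node, s in enumerate(shifts):
            stack = np.zeros(n_samp)
            for k in range(n_sta):
                shifted = np.roll(cf[k], -s[k])        # align each trace to origin time
                shifted[n_samp - s[k]:] = 0.0          # discard wrapped-around samples
                stack += shifted
            stack /= n_sta
            for i in np.flatnonzero(stack > threshold):
                detections.append((int(i), node))
        return detections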

  4. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    PubMed

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction/loss of facial movements and of emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks, confirming that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. In-Vivo Automatic Nuclear Cataract Detection and Classification in an Animal Model by Ultrasounds.

    PubMed

    Caixinha, Miguel; Amaro, Joao; Santos, Mario; Perdigao, Fernando; Gomes, Marco; Santos, Jaime

    2016-11-01

    To detect nuclear cataract in vivo at an early stage and automatically classify its severity degree, based on the ultrasound technique, using machine learning. A 20-MHz ophthalmic ultrasound probe with a focal length of 8.9 mm and an active diameter of 3 mm was used. Twenty-seven features in the time and frequency domains were extracted for cataract detection and classification with support vector machine (SVM), Bayes, multilayer perceptron, and random forest classifiers. Fifty rats were used: 14 as control and 36 as study group. An animal model for nuclear cataract was developed, yielding 12 rats with incipient, 13 with moderate, and 11 with severe cataract. The hardness of the nucleus and cortex regions was objectively measured in 12 rats using the NanoTest. Velocity, attenuation, and frequency downshift significantly increased with cataract formation. The SVM classifier showed the highest performance for the automatic classification of cataract severity, with a precision, sensitivity, and specificity of 99.7% (relative absolute error of 0.4%). A statistically significant difference was found in the hardness of the different cataract degrees (P = 0.016). The nucleus showed a higher hardness increase with cataract formation (P = 0.049). A moderate-to-good correlation between the features and the nucleus hardness was found in 23 of the 27 features. The developed methodology made it possible to detect nuclear cataract in vivo at early stages, to classify its severity degree automatically, and to estimate its hardness. Based on this work, a medical prototype will be developed for early cataract detection, classification, and hardness estimation.
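
    As a hedged illustration of the classification step, the sketch below trains an SVM on a 27-dimensional feature matrix with the class sizes reported above; the feature values and hyper-parameters are placeholders, not the study's acoustic measurements.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 27))                       # placeholder time/frequency features
    y = np.repeat([0, 1, 2, 3], [14, 12, 13, 11])       # control, incipient, moderate, severe

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())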

  6. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    PubMed

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and the target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required.

  7. Template-based automatic extraction of the joint space of foot bones from CT scan

    NASA Astrophysics Data System (ADS)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice, and the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of a joint space marked by the template, the hard constraint was set by the initial seeds, which were automatically generated from thresholding and morphological operations. The performance and the robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  8. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.

  9. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. The enhancement of ultrasonic images becomes a significant task before performing lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM)-based algorithm is proposed for ultrasound breast image training and classification. A disk expansion algorithm is then applied for automatically detecting lesion boundaries. A set of sub-images including smooth and irregular boundaries in tumor objects, and those in speckle-noised background, are trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object or a background pixel. This enhanced binary image can highlight the object and suppress the speckle noise, and it can be regarded as a degraded paint character (DPC) image containing closure noise, a concept well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method has been successfully demonstrated in our previous works. The boundary detection of ultrasonic breast lesions can thus be treated as the removal of speckle noise. By applying the disk expansion method to the binary image, we can obtain a significant radius-based image where the radius for each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used for searching the complete breast lesion region, and thus the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to

  10. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
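
    For reference, the Dice similarity coefficient quoted above can be computed directly from binary masks as in the short sketch below; the toy masks are illustrative.

    import numpy as np

    def dice(seg, gt):
        seg, gt = seg.astype(bool), gt.astype(bool)
        denom = seg.sum() + gt.sum()
        return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

    a = np.zeros((8, 8), int); a[2:6, 2:6] = 1          # 4x4 "tumor" mask
    b = np.zeros((8, 8), int); b[3:7, 3:7] = 1          # shifted 4x4 mask
    print(dice(a, b))                                   # 2*9/32 = 0.5625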

  11. One-Day Offset between Simulated and Observed Daily Hydrographs: An Exploration of the Issue in Automatic Model Calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Leon, L.; Yang, W.

    2014-12-01

    The literature of hydrologic modelling shows that in daily simulation of the rainfall-runoff relationship, the simulated hydrograph response to some rainfall events happens one day earlier than the observed one. This one-day offset issue results in significant residuals between the simulated and observed hydrographs and adversely impacts the model performance metrics that are based on the aggregation of daily residuals. Based on the analysis of sub-daily rainfall and runoff data sets in this study, the one-day offset issue appears to be inevitable when the same time interval, e.g. the calendar day, is used to measure daily rainfall and runoff data sets. This is an error introduced through data aggregation and needs to be properly addressed before calculating the model performance metrics. Otherwise, the metrics would not represent the modelling quality and could mislead the automatic calibration of the model. In this study, an algorithm is developed to scan the simulated hydrograph against the observed one, automatically detect all one-day offset incidents and shift the simulated hydrograph of those incidents one day forward before calculating the performance metrics. This algorithm is employed in the automatic calibration of the Soil and Water Assessment Tool that is set up for the Rouge River watershed in Southern Ontario, Canada. Results show that with the proposed algorithm, the automatic calibration to maximize the daily Nash-Sutcliffe (NS) identifies a solution that accurately estimates the magnitude of peak flow rates and the shape of rising and falling limbs of the observed hydrographs. But, without the proposed algorithm, the same automatic calibration finds a solution that systematically underestimates the peak flow rates in order to perfectly match the timing of simulated and observed peak flows.
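
    A minimal sketch of the idea follows: incidents where the simulated response arrives one day early are detected by comparing residuals against the next day's observation, and those simulated values are pushed one day forward before the Nash-Sutcliffe efficiency (NS) is computed. The incident-detection rule and the 0.5 factor are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def nse(sim, obs):
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def shift_one_day_offsets(sim, obs):
        sim, obs = np.asarray(sim, float).copy(), np.asarray(obs, float)
        for t in range(len(sim) - 1):
            # simulated peak matches tomorrow's observation much better than today's:
            # treat it as a one-day-early response and push it forward
            if abs(sim[t] - obs[t + 1]) < 0.5 * abs(sim[t] - obs[t]):
                sim[t], sim[t + 1] = sim[t + 1], sim[t]
        return sim

    The corrected series would then be used only in the metric calculation, e.g. nse(shift_one_day_offsets(sim, obs), obs), leaving the model output itself untouched.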

  12. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  13. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    PubMed Central

    Guo, Yu; Feng, Yuanming; Sun, Jian; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum. PMID:24987451

  14. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  15. Implementation of a microcontroller-based semi-automatic coagulator.

    PubMed

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable by a doctors' office and small clinics. The objective of this project is to design and implement a low cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  16. Deep Learning-Based Large-Scale Automatic Satellite Crosswalk Classification

    NASA Astrophysics Data System (ADS)

    Berriel, Rodrigo F.; Lopes, Andre Teixeira; de Souza, Alberto F.; Oliveira-Santos, Thiago

    2017-09-01

    High-resolution satellite imagery has been increasingly used in remote sensing classification problems, one of the main factors being the availability of this kind of data. Even so, very little effort has been devoted to the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalk-related tasks. This dataset is then used to train deep-learning-based models to accurately classify satellite images that do or do not contain zebra crossings. A novel dataset with more than 240,000 images from 3 continents, 9 countries and more than 20 cities was used in the experiments. Experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale.

  17. Evaluation of Automatic Atlas-Based Lymph Node Segmentation for Head-and-Neck Cancer

    SciTech Connect

    Stapleford, Liza J.; Lawson, Joshua D.; Perkins, Charles; Edelman, Scott; Davis, Lawrence

    2010-07-01

    Purpose: To evaluate if automatic atlas-based lymph node segmentation (LNS) improves efficiency and decreases inter-observer variability while maintaining accuracy. Methods and Materials: Five physicians with head-and-neck IMRT experience used computed tomography (CT) data from 5 patients to create bilateral neck clinical target volumes covering specified nodal levels. A second contour set was automatically generated using a commercially available atlas. Physicians modified the automatic contours to make them acceptable for treatment planning. To assess contour variability, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to take collections of contours and calculate a probabilistic estimate of the 'true' segmentation. Differences between the manual, automatic, and automatic-modified (AM) contours were analyzed using multiple metrics. Results: Compared with the 'true' segmentation created from manual contours, the automatic contours had a high degree of accuracy, with sensitivity, Dice similarity coefficient, and mean/max surface disagreement values comparable to the average manual contour (86%, 76%, 3.3/17.4 mm automatic vs. 73%, 79%, 2.8/17 mm manual). The AM group was more consistent than the manual group for multiple metrics, most notably reducing the range of contour volume (106-430 mL manual vs. 176-347 mL AM) and percent false positivity (1-37% manual vs. 1-7% AM). Average contouring time savings with the automatic segmentation was 11.5 min per patient, a 35% reduction. Conclusions: Using the STAPLE algorithm to generate 'true' contours from multiple physician contours, we demonstrated that, in comparison with manual segmentation, atlas-based automatic LNS for head-and-neck cancer is accurate, efficient, and reduces interobserver variability.

  18. Wireless sensor network-based greenhouse environment monitoring and automatic control system for dew condensation prevention.

    PubMed

    Park, Dae-Heon; Park, Jang-Woo

    2011-01-01

    Dew condensation on the leaf surface of greenhouse crops can promote diseases caused by fungi and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing the collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula to calculate the dew point on the leaves, the system prevents dew condensation on the crop surface, an important element in preventing disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control.
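
    The abstract relies on the Barenbrug formula for the leaf dew point, which is not reproduced here; as a stand-in, the sketch below uses the widely known Magnus approximation and a simple risk flag. The constants and safety margin are illustrative.

    import math

    def dew_point_magnus(temp_c, rel_humidity):
        """Dew point in deg C from air temperature (deg C) and relative humidity (0-100%)."""
        a, b = 17.62, 243.12
        gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity / 100.0)
        return b * gamma / (a - gamma)

    def condensation_risk(leaf_temp_c, air_temp_c, rel_humidity, margin=0.5):
        """Flag risk when the leaf is within `margin` deg C of the dew point."""
        return leaf_temp_c <= dew_point_magnus(air_temp_c, rel_humidity) + margin

    print(condensation_risk(leaf_temp_c=14.0, air_temp_c=18.0, rel_humidity=85.0))   # True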

  19. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to variable sleep data in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
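
    The decision rule can be caricatured as a maximum-likelihood choice over per-stage parameter densities; the sketch below uses independent Gaussian densities per parameter, which is a simplification of the multi-valued decision making method described above.

    import numpy as np
    from scipy.stats import norm

    STAGES = ["awake", "REM", "light", "deep"]

    def fit_stage_pdfs(epochs, labels):
        """epochs: (n_epochs, n_params); labels: array of stage names, one per epoch."""
        return {s: (epochs[labels == s].mean(axis=0),
                    epochs[labels == s].std(axis=0) + 1e-6)
                for s in STAGES}

    def determine_stage(params, pdfs):
        # stage with the highest joint log-likelihood of the epoch's parameters
        logp = {s: norm.logpdf(params, loc=m, scale=sd).sum()
                for s, (m, sd) in pdfs.items()}
        return max(logp, key=logp.get)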

  20. Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Shen, Yuzhong

    2011-03-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially for those existing in the real world. Real road network contains various elements such as road segments, road intersections and traffic interchanges. Among them, traffic interchanges present the most challenges to model due to their complexity and the lack of height information (vertical position) of traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting, and they have the advantages of arbitrary levels of detail with reduced memory usage. Finally a set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.

  1. Automatic generation of dynamic 3D models for medical segmentation tasks

    NASA Astrophysics Data System (ADS)

    Dornheim, Lars; Dornheim, Jana; Tönnies, Klaus D.

    2006-03-01

    Models of geometry or appearance of three-dimensional objects may be used for locating and specifying object instances in 3D image data. Such models are necessary for segmentation if the object to be segmented is not separable based on image information only. They provide a-priori knowledge about the expected shape of the target structure. The success of such a segmentation task depends on the incorporated model knowledge. We present an automatic method to generate such a model for a given target structure. This knowledge is created in the form of a 3D Stable Mass-Spring Model (SMSM) and can be computed from a single sample segmentation. The model is built from different image features using a bottom-up strategy, which allows for different levels of model abstraction. We show the adequacy of the generated models in two practical medical applications: the anatomical segmentation of the left ventricle in myocardial perfusion SPECT, and the segmentation of the thyroid cartilage of the larynx in CT datasets. In both cases, the model generation was performed in a few seconds.

  2. Why discourse structures in medical reports matter for the validity of automatically generated text knowledge bases.

    PubMed

    Hahn, U; Romacker, M; Schulz, S

    1998-01-01

    The automatic analysis of medical full-texts currently suffers from neglecting text coherence phenomena such as reference relations between discourse units. This has unwarranted effects on the description adequacy of medical knowledge bases automatically generated from texts. The resulting representation bias can be characterized in terms of artificially fragmented, incomplete and invalid knowledge structures. We discuss three types of textual phenomena (pronominal and nominal anaphora, as well as textual ellipsis) and outline basic methodologies how to deal with them.

  3. Automatic identification of sources and trajectories of atmospheric Saharan dust aerosols with Latent Gaussian Models

    NASA Astrophysics Data System (ADS)

    Garbe, Christoph; Bachl, Fabian

    2013-04-01

    Dust transported from the Sahara across the ocean has a high impact on radiation fluxes and marine nutrient cycles. Significant progress has been made in characterising Saharan dust properties (Formenti et al., 2011) and its radiative effects through the 'SAharan Mineral dUst experiMent' (SAMUM) (Ansmann et al., 2011). While the models simulating Saharan dust transport processes have been considerably improved in recent years, it is still an open question which meteorological processes and surface characteristics are mainly responsible for dust transported to the Sub-Tropical Atlantic (Schepanski et al., 2009; Tegen et al., 2012). Currently, there exists a large discrepancy between modelled dust emission events and those observed from satellites. In this contribution we present an approach for classifying and tracking dust plumes based on a Bayesian hierarchical model. Recent developments in computational statistics known as Integrated Nested Laplace Approximations (INLA) have paved the way for efficient inference in a respective subclass, the Generalized Linear Model (GLM) (Rue et al., 2009). We present the results of our approach based on data from the SEVIRI instrument on board the Meteosat Second Generation (MSG) satellite. We demonstrate the accuracy for automatically detecting sources of dust and aerosol concentrations in the atmosphere. The trajectories of aerosols are also computed very efficiently. In our framework, we automatically identify optimal parameters for the computation of atmospheric aerosol motion. The applicability of our approach to a wide range of conditions will be discussed, as well as the ground truthing of our results and future directions in this field of research.

  4. Environmental monitoring based on automatic change detection from remotely sensed data: kernel-based approach

    NASA Astrophysics Data System (ADS)

    Shah-Hosseini, Reza; Homayouni, Saeid; Safari, Abdolreza

    2015-01-01

    In the event of a natural disaster, such as a flood or earthquake, using fast and efficient methods for estimating the extent of the damage is critical. Automatic change mapping and estimation are important in order to monitor environmental changes, e.g., deforestation. Traditional change detection (CD) approaches are time consuming, user dependent, and strongly influenced by noise and/or complex spectral classes in a region. Change maps obtained by these methods usually suffer from isolated changed pixels and have low accuracy. To deal with this, an automatic CD framework is proposed that is based on the integration of the change vector analysis (CVA) technique, kernel-based C-means clustering (KCMC), and a kernel-based minimum distance (KBMD) classifier. In parallel with the proposed algorithm, a support vector machine (SVM) CD method is presented and analyzed. In the first step, a differential image is generated via two approaches in high-dimensional Hilbert space. Next, by using CVA and automatically determining a threshold, pseudo-training samples of the change and no-change classes are extracted. These training samples are used for determining the initial values of the KCMC parameters and training the SVM-based CD method. Then, optimizing a cost function based on geometrical and spectral similarity in the kernel space is employed in order to estimate the KCMC parameters and to select precise training samples. These training samples are used to train the KBMD classifier. Last, the class label of each unknown pixel is determined using the KBMD classifier and the SVM-based CD method. In order to evaluate the efficiency of the proposed algorithm for various remote sensing images and applications, two different datasets acquired by Quickbird and Landsat TM/ETM+ are used. The results show good flexibility and effectiveness of this automatic CD method for environmental change monitoring. In addition, the comparative analysis of results from the proposed method

  5. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    NASA Astrophysics Data System (ADS)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, and this heterogeneity impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. The Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topology-oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centroid of an HRU and the river network, and the length of the perimeter, can affect the realism of the calculated overland, sub-surface and groundwater fluxes. It is therefore necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing step implemented in the open source GRASS-GIS software, using several Python scripts and some algorithms already available, such as the Triangle software. First, some scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters contain many right angles, which were smoothed. Second, the algorithms address bad-shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or centroids outside the polygons. To identify these elements we used shape descriptors. The convexity index was considered the best descriptor to identify them with a threshold
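
    The convexity index mentioned above is commonly computed as the ratio of a polygon's area to that of its convex hull; the sketch below uses shapely, and the 0.8 threshold is an illustrative value rather than the study's.

    from shapely.geometry import Polygon

    def convexity_index(poly):
        return poly.area / poly.convex_hull.area

    hru = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)])   # L-shaped unit
    if convexity_index(hru) < 0.8:                                    # flag bad-shaped HRUs
        print("candidate for mesh pre-processing:", round(convexity_index(hru), 2))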

  6. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations; thereby, providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  7. Spectral phase-based automatic calibration scheme for swept source-based optical coherence tomography systems

    NASA Astrophysics Data System (ADS)

    Ratheesh, K. M.; Seah, L. K.; Murukeshan, V. M.

    2016-11-01

    The automatic calibration in Fourier-domain optical coherence tomography (FD-OCT) systems allows for high resolution imaging with precise depth ranging functionality in many complex imaging scenarios, such as microsurgery. However, the accuracy and speed of the existing automatic schemes are limited due to the functional approximations and iterative operations used in their procedures. In this paper, we present a new real-time automatic calibration scheme for swept source-based optical coherence tomography (SS-OCT) systems. The proposed automatic calibration can be performed during scanning operation and does not require an auxiliary interferometer for calibration signal generation and an additional channel for its acquisition. The proposed method makes use of the spectral component corresponding to the sample surface reflection as the calibration signal. The spectral phase function representing the non-linear sweeping characteristic of the frequency-swept laser source is determined from the calibration signal. The phase linearization with improved accuracy is achieved by normalization and rescaling of the obtained phase function. The fractional-time indices corresponding to the equidistantly spaced phase intervals are estimated directly from the resampling function and are used to resample the OCT signals. The proposed approach allows for precise calibration irrespective of the path length variation induced by the non-planar topography of the sample or galvo scanning. The conceived idea was illustrated using an in-house-developed SS-OCT system by considering the specular reflection from a mirror and other test samples. It was shown that the proposed method provides high-performance calibration in terms of axial resolution and sensitivity without increasing computational and hardware complexity.
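
    The resampling step can be sketched as follows: the unwrapped phase of a calibration fringe (here estimated with a Hilbert transform, an assumption about how the phase function is obtained) is normalized, fractional-time indices at equidistant phase intervals are read off by interpolation, and the A-line is resampled at those indices.

    import numpy as np
    from scipy.signal import hilbert

    def resample_to_linear_phase(aline, calib_fringe):
        phase = np.unwrap(np.angle(hilbert(calib_fringe)))   # sweep non-linearity
        phase = (phase - phase[0]) / (phase[-1] - phase[0])  # normalized, assumed monotonic
        n = len(aline)
        target = np.linspace(0.0, 1.0, n)                    # equidistant phase intervals
        frac_idx = np.interp(target, phase, np.arange(n))    # fractional-time indices
        return np.interp(frac_idx, np.arange(n), aline)      # linearized A-line samples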

  8. Automatic detection of avalanches in seismic data using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat

    2017-04-01

    Seismic monitoring systems are well suited for the remote detection of mass movements, such as landslides, rockfalls and debris flows. For snow avalanches, this has been known since the 1970s, and seismic monitoring could potentially provide valuable information for avalanche forecasting. We thus explored continuous seismic data from a string of vertical-component geophones in an avalanche starting zone above Davos, Switzerland. The overall goal is to automatically detect avalanches with a Hidden Markov Model (HMM), a statistical pattern recognition tool widely used for speech recognition. A HMM uses a classifier to determine the likelihood that input objects belong to a finite number of classes. These classes are obtained by learning a multidimensional Gaussian mixture model representation of the overall observable feature space. This model is then used to derive the HMM parameters for avalanche waveforms using a single training sample to build the final classifier. We classified data from the winter seasons of 2010 and compared the results to several hundred avalanches manually identified in the seismic data. First results from classifying a single day have shown that the model performs well in terms of probability of detection while having a relatively low false alarm rate. We further implemented a voting-based classification approach that neglects events detected by only one sensor to further improve the model performance. For instance, on 22 March 2010, a day with particularly high avalanche activity, 17 avalanches were positively identified by at least three sensors with no false alarms. These results show that the automatic detection of avalanches in seismic data is feasible, bringing us one step closer to implementing seismic monitoring systems in operational forecasting.

  9. Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.

    2012-12-01

    Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on comparison of short-term and long-term average ratios (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that consecutively applies kurtosis-derived characteristic functions (CF) and eigenvalue decompositions to 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth and dip) calculated from eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR). A pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7. We made a total of 1601 picks, 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64
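
    A minimal sketch of a single-band kurtosis characteristic function and a naive onset pick. The published scheme combines several frequency bands, window sizes and smoothing parameters; this sketch shows only the core idea, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, win=200):
    """Kurtosis-derived characteristic function over a sliding window.

    A sharp increase in the CF marks the transition from Gaussian noise
    to a non-Gaussian signal distribution, i.e. a candidate phase onset.
    """
    cf = np.zeros(len(trace))
    for i in range(win, len(trace)):
        cf[i] = kurtosis(trace[i - win:i])
    return cf

def pick_onset(cf):
    """Naive pick: sample with the largest positive CF gradient."""
    return int(np.argmax(np.diff(cf)))
```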

  10. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized via surface evolution. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  11. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-21

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized via surface evolution. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
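
    For illustration, a tiny 3D CNN that produces a voxel-wise probability map, as used for the initial surface and shape prior above. This is a hedged stand-in written in PyTorch; the class name, layer sizes and depth are assumptions, and the real network is substantially deeper and trained on abdominal CT volumes.

```python
import torch
import torch.nn as nn

class LiverProbNet(nn.Module):
    """Tiny 3D CNN producing a voxel-wise liver probability map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),  # per-voxel logit
        )

    def forward(self, ct_volume):  # (batch, 1, D, H, W)
        return torch.sigmoid(self.features(ct_volume))

# The resulting probability map would serve as the initial surface /
# shape prior for the subsequent globally optimized surface evolution.
```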

  12. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  13. Patch-based label fusion for automatic multi-atlas-based prostate segmentation in MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images, which utilizes patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on anatomical signature. This segmentation technique was validated with a clinical study of 13 patients and its accuracy was assessed using the physicians' manual segmentations (gold standard). Dice volumetric overlapping was used to quantify the difference between the automatic and manual segmentation. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy with manual segmentations.
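
    A minimal sketch of non-local patch-based label fusion for a single voxel, assuming patches from the selected atlases have already been extracted and aligned; the anatomical-signature weighting is reduced here to plain patch intensity similarity, and the function name and bandwidth parameter are illustrative.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Weighted label voting for one voxel.

    atlas_patches: (n, p) array of intensity patches from selected atlases.
    atlas_labels:  (n,) binary prostate labels at the corresponding voxels.
    Each atlas patch votes with a weight that decreases with its squared
    intensity distance to the target patch.
    """
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * target_patch.size))
    return float(np.sum(w * atlas_labels) / np.sum(w))  # fused probability
```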

  14. Automatic detection and classification of sleep stages by multichannel EEG signal modeling.

    PubMed

    Zhovna, Inna; Shallom, Ilan D

    2008-01-01

    In this paper a novel method for automatic detection and classification of sleep stages using multichannel electroencephalography (EEG) is presented. Understanding the sleep mechanism is vital for diagnosis and treatment of sleep disorders. The EEG is one of the most important tools for studying and diagnosing sleep disorders. Interpretation of EEG waveform activity is performed by visual analysis, a very difficult procedure. This research aims to ease the difficulties involved in the existing manual process of EEG interpretation by proposing an automatic sleep stage detection and classification system. The suggested method is based on a Multichannel Auto-Regressive (MAR) model. The multichannel analysis approach incorporates the cross-correlation information existing between different EEG signals. In the training phase, we used the Linde-Buzo-Gray (LBG) vector quantization (VQ) algorithm and defined each sleep stage by estimating a probability mass function (pmf) per stage using the Generalized Log Likelihood Ratio (GLLR) distortion. The classification phase was performed using Kullback-Leibler (KL) divergence. The results of this research are promising, with a classification accuracy of 93.2%. The results encourage continuation of this research in the sleep field and in other biomedical signal applications.
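
    The classification phase reduces to comparing pmfs with the KL divergence. The sketch below assumes per-stage pmfs over VQ codewords have already been estimated in training; the function names and dictionary layout are illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two pmfs."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def classify_epoch(epoch_pmf, stage_pmfs):
    """Assign the sleep stage whose trained pmf is closest in the KL sense.

    stage_pmfs: dict mapping stage name -> pmf over VQ codewords estimated
    in the training phase; epoch_pmf is the codeword histogram of the
    epoch to classify.
    """
    return min(stage_pmfs, key=lambda s: kl_divergence(epoch_pmf, stage_pmfs[s]))
```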

  15. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition.
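
    A simplified sketch of the component-rejection idea, assuming a set of artifact time courses recorded in advance as the a priori information. Note that the published method applies ICA to wavelet-decomposed data; this sketch applies ICA directly to the raw channels, and the correlation threshold and function name are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifacts(eeg, artifact_templates, corr_thresh=0.7):
    """Reject ICA components matching a priori artifact information.

    eeg: (n_samples, n_channels) array.
    artifact_templates: (n_templates, n_samples) reference artifact time
    courses recorded in advance (e.g. eye blinks). Components whose
    absolute correlation with any template exceeds `corr_thresh` are
    zeroed before reconstruction.
    """
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)            # (n_samples, n_components)
    for k in range(sources.shape[1]):
        corrs = [abs(np.corrcoef(sources[:, k], t)[0, 1])
                 for t in artifact_templates]
        if max(corrs) > corr_thresh:
            sources[:, k] = 0.0                 # drop artifact component
    return ica.inverse_transform(sources)       # artifact-free signals
```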

  16. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by AN Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher

  17. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models has emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used to validate business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for validating 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects. We show that this problem persists even if data are standard-compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  18. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As hardware and software technologies have progressed, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port programs to parallel systems while also achieving good performance.

  19. Detection and classification of football players with automatic generation of models

    NASA Astrophysics Data System (ADS)

    Gómez, Jorge R.; Jaraba, Elias Herrero; Montañés, Miguel Angel; Contreras, Francisco Martínez; Uruñuela, Carlos Orrite

    2010-01-01

    We focus on the automatic detection and classification of players in a football match. Our approach is not based on any a priori knowledge of the outfits, but on the assumption that the two main uniforms detected correspond to the two football teams. The algorithm is designed to be able to operate in real time, once it has been trained, and is able to detect partially occluded players and update the color of the kits to cope with some gradual illumination changes through time. Our method, evaluated from real sequences, gave better detection and classification results than those obtained by a system using a manual selection of samples to compute a Gaussian mixture model.

  20. Results of Flight Test of an Automatically Stabilized Model C (Swept Back) Four-Wing Tiamat

    NASA Technical Reports Server (NTRS)

    Seacord, Charles L., Jr.; Teitelbaum, J. M.

    1947-01-01

    The results of the first flight test of a swept-back four-wing version of the Tiamat (MX-570 model C), which was launched at the NACA Pilotless Aircraft Research Station at Wallops Island, Va., are presented. In general, the flight behavior was close to that predicted by calculations based on stability theory and oscillating-table tests of the autopilot. The flight test thus indicates that the techniques employed to predict automatic stability are valid and practical from an operational viewpoint. The limitations of the method used to predict flight behavior arise from the fact that the calculations assume no coupling among roll, pitch, and yaw, while in actual flight some such coupling does exist.

  1. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 ± 0.05 mm (mean absolute distance error) in the cervical region and 0.27 ± 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  2. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    PubMed

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent, for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  3. Automatic coregistration of volumetric images based on implanted fiducial markers.

    PubMed

    Koch, Martin; Maltz, Jonathan S; Belongie, Serge J; Gangadharan, Bijumon; Bose, Supratik; Shukla, Himanshu; Bani-Hashemi, Ali R

    2008-10-01

    The accurate delivery of external beam radiation therapy is often facilitated through the implantation of radio-opaque fiducial markers (gold seeds). Before the delivery of each treatment fraction, seed positions can be determined via low dose volumetric imaging. By registering these seed locations with the corresponding locations in the previously acquired treatment planning computed tomographic (CT) scan, it is possible to adjust the patient position so that seed displacement is accommodated. The authors present an unsupervised automatic algorithm that identifies seeds in both planning and pretreatment images and subsequently determines a rigid geometric transformation between the two sets. The algorithm is applied to the imaging series of ten prostate cancer patients. Each test series is comprised of a single multislice planning CT and multiple megavoltage conebeam (MVCB) images. Each MVCB dataset is obtained immediately prior to a subsequent treatment session. Seed locations were determined to within 1 mm with an accuracy of 97 +/- 6.1% for datasets obtained by application of a mean imaging dose of 3.5 cGy per study. False positives occurred in three separate instances, but only when datasets were obtained at imaging doses too low to enable fiducial resolution by a human operator, or when the prostate gland had undergone large displacement or significant deformation. The registration procedure requires under nine seconds of computation time on a typical contemporary computer workstation.
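
    Once corresponding seed locations have been identified in both image sets, the rigid transform can be estimated in closed form. A minimal sketch of that final fitting step (a standard SVD-based least-squares solution) is shown below; the correspondence matching performed by the published algorithm is assumed to have been done already, and the function name is illustrative.

```python
import numpy as np

def rigid_transform(seeds_plan, seeds_pretreat):
    """Least-squares rigid transform between matched fiducial sets.

    Both inputs are (n, 3) arrays of corresponding seed centroids; returns
    rotation R and translation t such that R @ p + t maps planning-CT seed
    positions onto the pretreatment (MVCB) positions.
    """
    cp, cq = seeds_plan.mean(axis=0), seeds_pretreat.mean(axis=0)
    P, Q = seeds_plan - cp, seeds_pretreat - cq
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```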

  4. Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Snarska, M.; Krzych, J.

    2006-11-01

    Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio, arising from lack of information, uncertainty and incomplete knowledge of reality, which preclude a perfect prediction of future price changes. Despite its many advantages, this tool is neither well known nor widely used among investors on the Warsaw Stock Exchange. The main reason for abandoning this method is its high complexity and heavy computational burden. The aim of this paper is to introduce an automatic decision-making system, which allows a single investor to use complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by so much statistical uncertainty that it can be treated as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so the future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator {http://gra.onet.pl}. The simulation yielded an 18% gain, compared with a corresponding 10% loss of the Warsaw Stock Exchange main index, WIG.
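
    A minimal sketch of the standard RMT noise-cleaning step: eigenvalues of the empirical correlation matrix lying inside the Marchenko-Pastur noise band are replaced by their average. The function name is illustrative and the sketch assumes standardized returns (unit variance), so the noise-band edge uses sigma² = 1.

```python
import numpy as np

def clean_correlation(returns):
    """Denoise an empirical correlation matrix via RMT eigenvalue clipping.

    returns: (T, N) matrix of asset returns (T observations, N assets).
    Eigenvalues below the Marchenko-Pastur upper edge are treated as noise
    and replaced by their average, which preserves the trace.
    """
    T, N = returns.shape
    x = (returns - returns.mean(0)) / returns.std(0)
    corr = x.T @ x / T
    lam, vec = np.linalg.eigh(corr)
    lam_max = (1 + np.sqrt(N / T)) ** 2        # MP upper edge, sigma^2 = 1
    noise = lam < lam_max
    lam[noise] = lam[noise].mean()             # clip the noise band
    cleaned = vec @ np.diag(lam) @ vec.T
    np.fill_diagonal(cleaned, 1.0)
    return cleaned
```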

  5. Automatic threshold optimization in nonlinear energy operator based spike detection.

    PubMed

    Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M

    2016-08-01

    In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy in both high SNR and low SNR signals. Boxplots are presented that provide a statistical analysis of improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized and traditional NEO thresholds, respectively.
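
    For reference, the NEO and the conventional scaled-mean threshold it is usually paired with are sketched below. The fixed scaling constant c is precisely what the paper replaces with an empirically optimized value; the smoothing window and function names are assumptions.

```python
import numpy as np

def neo(x):
    """Non-linear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Threshold the (smoothed) NEO output at c times its mean."""
    psi = np.convolve(neo(x), np.bartlett(7), mode="same")  # light smoothing
    thr = c * psi.mean()                                    # traditional threshold
    return np.where(psi > thr)[0]
```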

  6. Automatic traffic real-time analysis system based on video

    NASA Astrophysics Data System (ADS)

    Ding, Liya; Liu, Jilin; Zhou, Qubo; Wang, Rengrong

    2003-05-01

    Automatic traffic analysis is increasingly important in a world of heavy traffic. It can be achieved in numerous ways; among them, detection and analysis through a video system is an ideal choice, as it provides abundant information while causing little disturbance to the traffic. The proposed traffic vision analysis system uses an image acquisition card to capture real-time images of the traffic scene through a video camera, and then applies image processing and analysis to the image sequence to detect the presence and movement of vehicles. After removing the complex and constantly changing traffic background, the system segments each vehicle within the user-specified region of interest. The system extracts features from each vehicle and tracks them through the image sequence. Combined with calibration, the system calculates traffic information such as vehicle speed and type, flow volume, traffic density, lane queue length, turning movements, and so on. Traffic congestion and vehicle shadows complicate detection, segmentation and tracking, so considerable effort is devoted to methods for dealing with them.
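
    A hedged sketch of the background-removal and blob-segmentation stage using OpenCV's adaptive background subtractor; the full pipeline described above additionally performs calibration, feature-based tracking and explicit shadow handling, and the area threshold and function name are assumptions.

```python
import cv2

def vehicle_blobs(video_path, min_area=400):
    """Yield per-frame bounding boxes of foreground (vehicle) blobs."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                    # adaptive background model
        mask = cv2.medianBlur(mask, 5)
        # MOG2 marks shadows as 127; keep only definite foreground (255).
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > min_area]
        yield frame, boxes
    cap.release()
```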

  7. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    PubMed

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only covers company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using a data mining approach. The AutoMealRecord data were examined to determine whether they could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) from dietary preference was assessed with multiple linear regression analyses; in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross-validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord system. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Research on automatic optimization of ground control points in image geometric rectification based on Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Li, Ying; Cheng, Bo

    2009-10-01

    With the development of remote sensing satellites, the volume of remote sensing imagery is increasing tremendously, which makes geometric rectification based on manual ground control point (GCP) selection a huge workload. A GCP database is an effective way to cut down manual work, but GCPs loaded from a database are generally redundant, which can slow down rectification. How to automatically optimize these ground control points is a problem that needs to be resolved urgently. Starting from the basic theory of geometric rectification and the principles of GCP selection, this paper reviews existing methods for automatic GCP optimization and puts forward a new Voronoi-diagram-based method that filters the redundant control points without manual subjectivity, for better accuracy. The paper is organized as follows: First, it clarifies the basic theory of polynomial geometric rectification of remote sensing images and the computation of the GCP error. Second, it introduces the Voronoi diagram, including its origin, development and characteristics, especially its construction. Third, considering the deficiencies of existing methods for automatic GCP optimization, the paper presents the idea of applying the Voronoi diagram to filter GCPs, and introduces the concept of a single GCP's importance value based on the Voronoi diagram. Then, by integrating the GCP error and the importance value, the paper gives the theory and workflow of automatic GCP optimization, and presents an application example. In the conclusion, it points out the advantages of Voronoi-diagram-based automatic GCP optimization.
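
    One plausible reading of the "importance value" of a single GCP is the area of its Voronoi cell: points in dense clusters contribute little unique spatial coverage and are the first candidates for removal (in combination with their residual error). The sketch below is an assumption about that definition, not the paper's exact formula, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import Voronoi

def gcp_importance(points):
    """Importance of each GCP as the area of its Voronoi cell.

    points: (n, 2) array of GCP image coordinates. GCPs with unbounded
    cells (on the convex hull) get infinite importance so they are never
    discarded; interior GCPs in dense clusters get small areas.
    """
    vor = Voronoi(points)
    areas = np.full(len(points), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                              # unbounded cell
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # Shoelace formula for the polygon area.
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return areas
```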

  9. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part for diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of EEG signal has been used to obtain the time-frequency image (TFI). The segmentation of TFI has been performed based on the frequency-bands of the rhythms of EEG signals. The features derived from the histogram of segmented TFI have been used as an input feature set to multiclass least squares support vector machines (MC-LS-SVM) together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions for automatic classification of sleep stages from EEG signals. The experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals.

  10. A Telesurveillance System With Automatic Electrocardiogram Interpretation Based on Support Vector Machine and Rule-Based Processing

    PubMed Central

    Lin, Ching-Miao; Lai, Feipei; Ho, Yi-Lwun; Hung, Chi-Sheng

    2015-01-01

    Background Telehealth care is a global trend affecting clinical practice around the world. To mitigate the workload of health professionals and provide ubiquitous health care, a comprehensive surveillance system with value-added services based on information technologies must be established. Objective We conducted this study to describe our proposed telesurveillance system designed for monitoring and classifying electrocardiogram (ECG) signals and to evaluate the performance of ECG classification. Methods We established a telesurveillance system with an automatic ECG interpretation mechanism. The system included: (1) automatic ECG signal transmission via telecommunication, (2) ECG signal processing, including noise elimination, peak estimation, and feature extraction, (3) automatic ECG interpretation based on the support vector machine (SVM) classifier and rule-based processing, and (4) display of ECG signals and their analyzed results. We analyzed 213,420 ECG signals that were diagnosed by cardiologists as the gold standard to verify the classification performance. Results In the clinical ECG database from the Telehealth Center of the National Taiwan University Hospital (NTUH), the experimental results showed that the ECG classifier yielded a specificity value of 96.66% for normal rhythm detection, a sensitivity value of 98.50% for disease recognition, and an accuracy value of 81.17% for noise detection. For the detection performance of specific diseases, the recognition model mainly generated sensitivity values of 92.70% for atrial fibrillation, 89.10% for pacemaker rhythm, 88.60% for atrial premature contraction, 72.98% for T-wave inversion, 62.21% for atrial flutter, and 62.57% for first-degree atrioventricular block. Conclusions Through connected telehealth care devices, the telesurveillance system, and the automatic ECG interpretation system, this mechanism was intentionally designed for continuous decision-making support and is reliable enough to reduce the

  11. Controlling Retrieval during Practice: Implications for Memory-Based Theories of Automaticity

    ERIC Educational Resources Information Center

    Wilkins, Nicolas J.; Rawson, Katherine A.

    2011-01-01

    Memory-based processing theories of automaticity assume that shifts from algorithmic to retrieval-based processing underlie practice effects on response times. The current work examined the extent to which individuals can exert control over the involvement of retrieval during skill acquisition and the factors that may influence control. In two…

  12. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Compressed file sizes are reduced by roughly half on average for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
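
    A minimal sketch of the continuously-variable-blur idea, approximated here by blending the original frame with a single strongly blurred copy according to the saliency map; the blur strength and function name are assumptions, and the published filter is more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, saliency, max_sigma=6.0):
    """Blend a sharp and a blurred frame according to a saliency map.

    frame:    2-D grayscale image (apply per channel for colour).
    saliency: map in [0, 1], high where encoding priority is high.
    High-saliency pixels stay sharp; low-saliency pixels are blurred,
    which reduces high-frequency content before compression.
    """
    blurred = gaussian_filter(frame.astype(float), sigma=max_sigma)
    w = np.clip(saliency, 0.0, 1.0)
    return w * frame + (1.0 - w) * blurred
```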

  13. BioASF: a framework for automatically generating executable pathway models specified in BioPAX.

    PubMed

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K Anton; Abeln, Sanne; Heringa, Jaap

    2016-06-15

    Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF CONTACT: j.heringa@vu.nl. © The Author 2016. Published by Oxford University Press.

  14. BioASF: a framework for automatically generating executable pathway models specified in BioPAX

    PubMed Central

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K. Anton; Abeln, Sanne; Heringa, Jaap

    2016-01-01

    Motivation: Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. Results: To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. Availability and Implementation: The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF. Contact: j.heringa@vu.nl PMID:27307645

  15. Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model.

    PubMed

    Coden, Anni; Savova, Guergana; Sominsky, Igor; Tanenblatt, Michael; Masanz, James; Schuler, Karin; Cooper, James; Guan, Wei; de Groen, Piet C

    2009-10-01

    We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets.

  16. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    PubMed

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three modeling approaches to item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential to meet increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed.

  17. Study on automatic airborne image positioning model and its application in FY-3A airborne experiment

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yang, Zhongdong; Guan, Min; Zhang, Liyang; Wang, Tiantian

    2009-08-01

    This paper addresses an airborne image positioning model and its application in the FY-3A experiment. First, the viewing vector of the FY-3A Medium Resolution Spectral Imager (MERSI) is derived from MERSI's imaging pattern. Then, the image positioning model, which is based on Earth-aircraft geometry, is analyzed mathematically in detail. The model parameters are mainly determined by the sensor-aircraft alignment and the onboard discrete position and orientation measurements. Flight trials were flown at an altitude of 8300 m over Qinghai Lake, China. It is shown that the image positioning accuracy (about 1~4 pixels) is better than that of previous methods (more than 7 pixels, [G. J. Jedlovec et al. NASA Technical Memorandum TM - 100352 (1989) and D. P. Roy et al. Int. J. Rem. Sens. 18(9), 1865 - 1887 (1997)]). It is also shown that the model has the potential to hold the image positioning errors within one pixel. The model can operate automatically and does not need ground control point data. Since the algorithm obtains the image positioning results from an observation-geometry perspective, by computing the point at which the sensor viewing vector intersects the Earth's surface, it assumes the airborne data are acquired over flat terrain.
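
    The core geometric operation, intersecting the sensor viewing vector with the Earth's surface, can be sketched as a ray-sphere intersection. This is a spherical-Earth simplification (the operational model would use the reference ellipsoid and account for terrain), and the frame conventions and function name are assumptions.

```python
import numpy as np

EARTH_RADIUS = 6371000.0  # spherical-Earth simplification, metres

def ground_point(sensor_pos, view_vec):
    """Intersect a sensor viewing vector with a spherical Earth.

    sensor_pos: aircraft position in an Earth-centred Cartesian frame (m).
    view_vec:   viewing vector in the same frame, already rotated by the
                measured aircraft attitude and the sensor-aircraft alignment.
    Returns the nearer intersection point, or None if the ray misses.
    """
    p = np.asarray(sensor_pos, dtype=float)
    d = np.asarray(view_vec, dtype=float)
    d = d / np.linalg.norm(d)
    # Solve |p + s*d|^2 = R^2 for the ray parameter s (a = 1 since |d| = 1).
    b = 2.0 * np.dot(p, d)
    c = np.dot(p, p) - EARTH_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    s = (-b - np.sqrt(disc)) / 2.0            # nearer root along the ray
    return p + s * d
```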

  18. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  19. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
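
    A minimal sketch of the selection mechanism: one regression tree per PET-AS method predicts the achievable DSC from the three tumour descriptors, and the method with the highest predicted DSC is chosen. The tree depth and function names are assumptions, not the published hyperparameters.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_selector(features, dsc_per_method):
    """Train one accuracy-predicting tree per segmentation method.

    features:       (n_scans, 3) array of tumour volume, peak-to-background
                    SUV ratio and a regional texture metric.
    dsc_per_method: dict mapping method name -> (n_scans,) Dice scores on
                    scans with known true contours.
    """
    return {name: DecisionTreeRegressor(max_depth=4).fit(features, dsc)
            for name, dsc in dsc_per_method.items()}

def best_method(trees, scan_features):
    """Pick the PET-AS method with the highest predicted Dice score."""
    preds = {name: tree.predict(scan_features.reshape(1, -1))[0]
             for name, tree in trees.items()}
    return max(preds, key=preds.get)
```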

  20. Automatic blood pressure monitors. Evaluation of three models in volunteers.

    PubMed

    Imbelloni, Luiz Eduardo; Beato, Lúcia; Tolentino, Ana Paula; de Souza, Dulcimar Donizete; Cordeiro, José Antônio

    2004-02-01

    Since 1903, blood pressure has been noninvasively monitored (NIBP), either with a manual sphygmomanometer or with automated noninvasive devices. One NIBP measurement problem is the considerable variance in blood pressure data, both within and between available techniques. The oscillometric method for NIBP monitoring evaluates blood pressure during cuff deflation. Difficulties in blood pressure measurement by oscillometry may arise from inadequate cuff size, inadequate cuff application, undetected failures in the cuff, hoses, or connectors, arm movement, shock, and vascular compression proximal to the cuff. This study aimed at evaluating the reliability of three noninvasive blood pressure monitoring devices over five measurements. Blood pressure of 60 healthy female volunteers aged 20 to 40 years was evaluated from 7 am to 11 am, in the sitting position, during a normal workday. Five measurements were taken with each device at 2-minute intervals. Three automatic blood pressure monitors were studied. No volunteer was obese, hypertensive, or suffering from cardiac disease or arrhythmia. Indirect measurements were made according to the manufacturers' instructions. There were no differences in demographics among the three studied groups. Mean intrapersonal variation from one measurement to the next was up to 6.7 mmHg for systolic blood pressure (SBP), 4.9 mmHg for mean blood pressure (MBP) and 3.3 mmHg for diastolic blood pressure (DBP), with a 95% confidence interval. The highest difference between measurements in the same volunteer was 49 mmHg for SBP, 46 mmHg for MBP and 28 mmHg for DBP. This study has shown significant variations in SBP, MBP and DBP, and that SBP is the most reliable parameter to check blood pressure changes in volunteers.

  1. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    PubMed

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms. Thus it is desired to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following

  2. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.

  3. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, ratio of edge pixels to nonedge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is conducted to merge the points that satisfy a predefined threshold and are supposed to denote the same vehicles. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method performed 86% and 83% in overall correctness and completeness, respectively.

  4. Automatic classification of minimally invasive instruments based on endoscopic image sequences

    NASA Astrophysics Data System (ADS)

    Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2009-02-01

    Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.

  5. A VxD-based automatic blending system using multithreaded programming.

    PubMed

    Wang, L; Jiang, X; Chen, Y; Tan, K C

    2004-01-01

    This paper discusses the object-oriented software design for an automatic blending system. By combining the advantages of a programmable logic controller (PLC) and an industrial control PC (ICPC), an automatic blending control system is developed for a chemical plant. The system structure and multithread-based communication approach are first presented in this paper. The overall software design issues, such as system requirements and functionalities, are then discussed in detail. Furthermore, by replacing the conventional dynamic link library (DLL) with virtual X device drivers (VxD's), a practical and cost-effective solution is provided to improve the robustness of the Windows platform-based automatic blending system in small- and medium-sized plants.

  6. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    PubMed Central

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitalization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitalization consistency. PMID:26625402

  7. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features.

    PubMed

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J; Shen, Dinggang

    2016-09-01

    The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. We propose a segmentation-guided partially-joint regression forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization method to extract high-level multiscale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Our model has addressed challenges of both interpatient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitalization. Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitalization consistency.

  8. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute), consisting of low-quality DEMs of various types.

  9. ADOPT: A tool for automatic detection of tectonic plates at the surface of convection models

    NASA Astrophysics Data System (ADS)

    Mallard, C.; Jacquet, B.; Coltice, N.

    2017-08-01

    Mantle convection models with plate-like behavior produce surface structures comparable to Earth's plate boundaries. However, analyzing those structures is a difficult task, since convection models produce, as on Earth, diffuse deformation and elusive plate boundaries. Therefore we present here and share a quantitative tool to identify plate boundaries and produce plate polygon layouts from results of numerical models of convection: Automatic Detection Of Plate Tectonics (ADOPT). This digital tool operates within the free open-source visualization software Paraview. It is based on image segmentation techniques to detect objects. The fundamental algorithm used in ADOPT is the watershed transform. We transform the output of convection models into a topographic map, the crest lines being the regions of deformation (plate boundaries) and the catchment basins being the plate interiors. We propose two generic protocols (the field and the distance methods) that we test against an independent visual detection of plate polygons. We show that ADOPT is effective at identifying the smaller plates and closing plate polygons in areas where boundaries are diffuse or elusive. ADOPT allows the export of plate polygons in the standard OGR-GMT format for visualization, modification, and analysis under generic software such as GMT or GPlates.
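
    The watershed idea can be sketched in a few lines (this is not the ADOPT implementation, which runs inside Paraview; the array names and seeding strategy are assumptions):

      import numpy as np
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def detect_plates(deformation):
          """deformation: 2D array of surface deformation, high along plate boundaries."""
          # One marker per local minimum of the deformation field (maxima of its negative),
          # i.e. one seed inside each prospective plate interior.
          seeds = peak_local_max(-deformation, min_distance=10)
          markers = np.zeros(deformation.shape, dtype=int)
          for i, (r, c) in enumerate(seeds, start=1):
              markers[r, c] = i
          # Flooding the "topography": catchment basins become plate polygons,
          # crest lines (plate boundaries) separate them.
          return watershed(deformation, markers)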

  10. Automatic modeling of pectus excavatum corrective prosthesis using artificial neural networks.

    PubMed

    Rodrigues, Pedro L; Rodrigues, Nuno F; Pinho, A C M; Fonseca, Jaime C; Correia-Pinto, Jorge; Vilaça, João L

    2014-10-01

    Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) to successfully employ a thoracic prosthesis for anterior chest wall remodeling. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for the replacement of CT by a 3D laser scanner (radiation-free) for prosthesis modeling. The complete elimination of CT is based on an accurate determination of rib positions and the prosthesis placement region through skin surface points. The developed solution resorts to a normalized and combined outcome of an artificial neural network (ANN) set. Each ANN model was trained with data vectors from 165 male patients and using soft tissue thicknesses (STT) comprising information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. It was also shown that the ANN performance can be improved by introducing a manually determined initial STT value in the ANN normalization procedure (average error of 2.82 ± 0.76 mm). Such an error range is well below current manual prosthesis modeling (approximately 11 mm), which can provide a valuable and radiation-free procedure for prosthesis personalization. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Automatic Topology Derivation from Ifc Building Model for In-Door Intelligent Navigation

    NASA Astrophysics Data System (ADS)

    Tang, S. J.; Zhu, Q.; Wang, W. W.; Zhang, Y. T.

    2015-05-01

    To achieve accurate navigation within the building environment, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, the so-called in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or purely geometric models, which suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. The shortcomings of existing topology construction approaches can be mitigated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method that combines a straight medial axis transformation (S-MAT) algorithm with the IFC building model to reconstruct the indoor geometric topology network. This derived topology is aimed at facilitating decision making for different in-building navigation tasks. We describe a multi-step derivation process, including semantic cleaning, walkable-feature extraction, Multi-Storey 2D Mapping and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  12. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  13. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.

  14. Automatic system for brain MRI analysis using a novel combination of fuzzy rule-based and automatic clustering techniques

    NASA Astrophysics Data System (ADS)

    Hillman, Gilbert R.; Chang, Chih-Wei; Ying, Hao; Kent, T. A.; Yen, John

    1995-05-01

    Analysis of magnetic resonance images (MRI) of the brain permits the identification and measurement of brain compartments. These compartments include normal subdivisions of brain tissue, such as gray matter, white matter and specific structures, and also include pathologic lesions associated with stroke or viral infection. A fuzzy system has been developed to analyze images of animal and human brain, segmenting the images into physiologically meaningful regions for display and measurement. This image segmentation system consists of two stages which include a fuzzy rule-based system and fuzzy c-means algorithm (FCM). The first stage of this system is a fuzzy rule-based system which classifies most pixels in MR images into several known classes and one `unclassified' group, which fails to fit the predetermined rules. In the second stage, this system uses the result of the first stage as initial estimates for the properties of the compartments and applies FCM to classify all the previously unclassified pixels. The initial prototypes are estimated by using the averages of the previously classified pixels. The combined processes constitute a fast, accurate and robust image segmentation system. This method can be applied to many clinical image segmentation problems. While the rule-based portion of the system allows specialized knowledge about the images to be incorporated, the FCM allows the resolution of ambiguities that result from noise and artifacts in the image data. The volumes and locations of the compartments can easily be measured and reported quantitatively once they are identified. It is easy to adapt this approach to new imaging problems, by introducing a new set of fuzzy rules and adjusting the number of expected compartments. However, for the purpose of building a practical fully automatic system, a rule learning mechanism may be necessary to improve the efficiency of modification of the fuzzy rules.
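
    A conceptual sketch of the two-stage scheme follows; the fuzzy rule stage and the per-pixel features are assumed to exist, and only the FCM refinement of the unclassified pixels is shown, initialised with class means from the rule-based stage.

      import numpy as np

      def fuzzy_c_means(X, centers, m=2.0, n_iter=100, tol=1e-5):
          """X: (N, d) pixel features; centers: (c, d) initial prototypes."""
          for _ in range(n_iter):
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              # fuzzy memberships u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
              u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
              um = u ** m
              new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
              if np.linalg.norm(new_centers - centers) < tol:
                  return u, new_centers
              centers = new_centers
          return u, centers

      def refine(classified_feats, unclassified_feats):
          """classified_feats[k]: features of pixels the rules assigned to class k;
          unclassified_feats: features of the pixels the rules left unclassified."""
          init = np.stack([f.mean(axis=0) for f in classified_feats])
          u, _ = fuzzy_c_means(unclassified_feats, init)
          return u.argmax(axis=1)   # hard label for each previously unclassified pixel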

  15. Automatic generation of fuzzy rules for the sensor-based navigation of a mobile robot

    SciTech Connect

    Pin, F.G.; Watanabe, Y.

    1994-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  16. Sensor-based navigation of a mobile robot using automatically constructed fuzzy rules

    SciTech Connect

    Watanabe, Y.; Pin, F.G.

    1993-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  17. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.

  18. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  19. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    ERIC Educational Resources Information Center

    Perez-Marin, Diana; Pascual-Nieto, Ismael

    2010-01-01

    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  20. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  1. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  2. Invariant wavelet transform-based automatic target recognition

    NASA Astrophysics Data System (ADS)

    Sadovnik, Lev S.; Rashkovskiy, Oleg; Tebelev, Igor

    1995-03-01

    The authors' previous work (SPIE Vol. 2237) on scale-, rotation- and shift-invariant wavelet transform is extended to accommodate multiple objects in the scene and a nonuniform background. After background elimination and segmentation, a set of windows each containing a single object are analyzed based on an invariant wavelet feature extraction algorithm and neural network-based classifier.

  3. FishCam - A semi-automatic video-based monitoring system of fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2016-04-01

    One of the main objectives of the Water Framework Directive is to preserve and restore the continuum of river networks. Regarding vertebrate migration, fish passes are a widely used measure to overcome anthropogenic constructions. Functionality of this measure needs to be verified by monitoring. In this study we propose a newly developed monitoring system, named FishCam, to observe fish migration especially in fish passes without contact and without imposing stress on fish. To avoid time- and cost-consuming field work for fish pass monitoring, this project aims to develop a semi-automatic monitoring system that enables a continuous observation of fish migration. The system consists of a detection tunnel and a high-resolution camera, which is mainly based on the technology of security cameras. If changes in the image, e.g. by migrating fish or drifting particles, are detected by a motion sensor, the camera system starts recording and continues until no further motion is detectable. An ongoing key challenge in this project is the development of robust software, which counts, measures and classifies the passing fish. To achieve this goal, many different computer vision tasks and classification steps have to be combined. Moving objects have to be detected and separated from the static part of the image, objects have to be tracked throughout the entire video and fish have to be separated from non-fish objects (e.g. foliage and woody debris, shadows and light reflections). Subsequently, the length of all detected fish needs to be determined and fish should be classified into species. The classification of objects into fish and non-fish is realized through ensembles of state-of-the-art classifiers on a single image per object. The choice of the best image for classification is implemented through a newly developed "fish benchmark" value. This value compares the actual shape of the object with a schematic model of side-specific fish. To enable an automatization of the

  4. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses Multilayer Perceptron Neural Networks to analyze the supernovae spectra. Experiments were performed using different pre-processing and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained indicating the viability of using this method in places that have no specialist or that require an automatic analysis.
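
    A minimal sketch of such a classifier (the data arrays, resampling length and network size are placeholders, not the authors' configuration):

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      def preprocess(flux):
          """Resample a spectrum to a fixed length and normalise it."""
          flux = np.interp(np.linspace(0, 1, 512),
                           np.linspace(0, 1, len(flux)), flux)
          return (flux - flux.mean()) / (flux.std() + 1e-12)

      def train_classifier(spectra, types):
          """spectra: list of 1D flux arrays; types: labels such as 'Ia', 'Ib', 'Ic', 'II'."""
          X = np.stack([preprocess(s) for s in spectra])
          X_tr, X_te, y_tr, y_te = train_test_split(X, types, test_size=0.2,
                                                    stratify=types, random_state=0)
          clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000)
          clf.fit(X_tr, y_tr)
          print("hold-out accuracy:", clf.score(X_te, y_te))
          return clf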

  5. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. The traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This would be a drawback in real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques have poor performance when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique with a convolutional neural network (CNN) and Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection that allows for ZPF to compute the self-conjugated phase to compensate for most aberrations.
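
    The fitting step can be sketched as an ordinary least-squares fit of low-order aberration terms over the CNN-detected background (a simplified stand-in for the paper's ZPF; the CNN mask, the unnormalised Zernike-like terms and the term selection are assumptions):

      import numpy as np

      def zernike_basis(shape):
          """Unnormalised low-order Zernike-like terms on a disk covering the image."""
          h, w = shape
          y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
          r2 = x ** 2 + y ** 2
          return np.stack([np.ones_like(x),        # piston
                           x, y,                   # tilts
                           2 * r2 - 1,             # defocus
                           x * y, x ** 2 - y ** 2  # astigmatisms
                           ], axis=-1)             # shape (h, w, 6)

      def compensate(phase, background_mask):
          """phase: unwrapped phase image; background_mask: boolean mask from the CNN."""
          basis = zernike_basis(phase.shape)
          A = basis[background_mask]                # (n_background_pixels, 6)
          b = phase[background_mask]
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
          aberration = basis @ coeffs               # fitted aberration over the full field
          return phase - aberration                 # compensated phase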

  6. Automatic visible and infrared face registration based on silhouette matching and robust transformation estimation

    NASA Astrophysics Data System (ADS)

    Tian, Tian; Mei, Xiaoguang; Yu, Yang; Zhang, Can; Zhang, Xiaoye

    2015-03-01

    Registration of multi-sensor data (particularly visible color sensors and infrared sensors) is a prerequisite for multimodal image analysis such as image fusion. In this paper, we propose an automatic registration technique for visible and infrared face images based on silhouette matching and robust transformation estimation. The key idea is to represent a (visible or infrared) face image by its silhouette, which is extracted from the image's edge map and consists of a set of discrete points, and then align the two silhouette point sets by using their feature similarity and spatial geometrical information. More precisely, our algorithm first matches the silhouette point sets by their local shape features such as shape context, which creates a set of putative correspondences that may be contaminated by outliers. Next, we estimate the accurate transformation from the putative correspondence set under a robust maximum likelihood framework combined with the EM algorithm, where the transformation between the image pair is modeled by a parametric model such as the rigid or affine transformation. The qualitative and quantitative comparisons on a publicly available database demonstrate that our method significantly outperforms other state-of-the-art visible/infrared face registration methods. As a result, our method will be beneficial for fusion-based face recognition.

  7. A Zipfian Model of an Automatic Bibliographic System: An Application to MEDLINE.

    ERIC Educational Resources Information Center

    Fedorowicz, Jane

    1982-01-01

    Derives the underlying structure of the Zipf distribution, with emphasis on its application to word frequencies in the inverted files of automatic bibliographic systems, and applies the Zipfian model to the National Library of Medicine's MEDLINE database. An appendix on the Zipfian mean and 12 references are included. (Author/JL)

  8. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  9. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  10. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  11. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  12. [Automatic classification method of star spectrum data based on constrained concept lattice].

    PubMed

    Zhang, Ji-Fu; Ma, Yang

    2010-02-01

    The concept lattice is an effective formal tool for data analysis and knowledge extraction. The constrained concept lattice, with the characteristics of higher construction efficiency, practicability and pertinency, is a new concept lattice structure. For the automatic classification task of star spectra, a classification rule mining method based on the constrained concept lattice is presented, using the concepts of partition and extent supports. Finally, experiments taking the star spectrum data as the formal context validate the higher classification efficiency and correctness of the method, providing an effective way for the automatic classification of massive star spectrum data.

  13. An automatic multi-lead electrocardiogram segmentation algorithm based on abrupt change detection.

    PubMed

    Illanes-Manriquez, Alfredo

    2010-01-01

    Automatic detection of electrocardiogram (ECG) waves provides important information for cardiac disease diagnosis. In this paper a new algorithm is proposed for automatic ECG segmentation based on multi-lead ECG processing. Two auxiliary signals are computed from the first and second derivatives of several ECG lead signals. One auxiliary signal is used for R peak detection and the other for ECG wave delimitation. A statistical hypothesis test is finally applied to one of the auxiliary signals in order to detect abrupt mean changes. Preliminary experimental results show that the detected mean-change instants coincide with the boundaries of the ECG waves.
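
    A generic sliding-window mean-change detector illustrates the idea (the paper's specific hypothesis test is not reproduced; the window length and decision constant are assumptions):

      import numpy as np

      def mean_change_points(signal, win=20, k=4.0):
          """Flag samples where the mean jump between adjacent windows is significant."""
          changes = []
          for i in range(win, len(signal) - win):
              before, after = signal[i - win:i], signal[i:i + win]
              pooled = np.sqrt((before.var() + after.var()) / 2) + 1e-12
              if abs(after.mean() - before.mean()) > k * pooled / np.sqrt(win):
                  changes.append(i)
          return changes

      # Usage idea: build the auxiliary signal from first/second derivatives of several
      # ECG leads, then look for wave boundaries among mean_change_points(aux_signal).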

  14. Automatic calibration of space based manipulators and mechanisms

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1988-01-01

    Four tasks in manipulator kinematic calibration are summarized. Calibration of a seven degree of freedom manipulator was simulated. A calibration model is presented that can be applied on a closed-loop robot. It is an expansion of open-loop kinematic calibration algorithms subject to constraints. A closed-loop robot with a five-bar linkage transmission was tested. Results show that the algorithm converges within a few iterations. The concept of model differences is formalized. Differences are categorized as structural and numerical, with emphasis on the structural. The work demonstrates that geometric manipulators can be visualized as points in a vector space with the dimension of the space depending solely on the number and type of manipulator joint. Visualizing parameters in a kinematic model as the coordinates locating the manipulator in vector space enables a standard evaluation of the models. Key results include a derivation of the maximum number of parameters necessary for models, a formal discussion on the inclusion of extra parameters, and a method to predetermine a minimum model structure for a kinematic manipulator. A technique is presented that enables single point sensors to gather sufficient information to complete a calibration.

  15. Automatic Summarization of MEDLINE Citations for Evidence-Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398

  16. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
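
    A hedged sketch of the basic selective background update that such schemes build on (the paper's zone-distribution logic is not reproduced; the adaptation rate and thresholds are assumptions):

      import numpy as np

      class Background:
          def __init__(self, first_frame, alpha=0.02, diff_thresh=25, reset_frac=0.6):
              self.bg = first_frame.astype(np.float32)
              self.alpha = alpha              # adaptation rate for gradual illumination change
              self.diff_thresh = diff_thresh  # foreground threshold (grey levels)
              self.reset_frac = reset_frac    # changed-pixel fraction that triggers re-init

          def update(self, frame):
              frame = frame.astype(np.float32)
              fg = np.abs(frame - self.bg) > self.diff_thresh
              if fg.mean() > self.reset_frac:  # sudden camera perturbation or global change
                  self.bg = frame.copy()
                  return np.zeros_like(fg)
              # update only background pixels so stopped vehicles do not "burn in"
              self.bg[~fg] = (1 - self.alpha) * self.bg[~fg] + self.alpha * frame[~fg]
              return fg                        # foreground mask (candidate vehicles)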

  17. Rheticus: an automatic cloud-based geo-information service platform for territorial monitoring

    NASA Astrophysics Data System (ADS)

    Samarelli, Sergio; Lorusso, Antonio Pio; Agrimano, Luigi; Nutricato, Raffaele; Bovenga, Fabio; Nitti, Davide Oscar; Chiaradia, Maria Teresa

    2016-10-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and, if appropriate, a minimum interaction with human operators. In this paper, we outline the capabilities of the "Rheticus® Displacement" service, designed for geohazard and infrastructure monitoring through Multi-Temporal SAR Interferometry techniques.

  18. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  19. An Automatic Document Indexing System Based on Cooperating Expert Systems: Design and Development.

    ERIC Educational Resources Information Center

    Schuegraf, Ernst J.; van Bommel, Martin F.

    1993-01-01

    Describes the design of an automatic indexing system that is based on statistical techniques and expert system technology. Highlights include system architecture; the derivation of topic indicators, including word frequency; experimental results using documents from ERIC; the effects of stemming; and the identification of characteristic…

  20. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  1. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  2. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  3. Automatic hearing loss detection system based on auditory brainstem response

    NASA Astrophysics Data System (ADS)

    Aldonate, J.; Mercuri, C.; Reta, J.; Biurrun, J.; Bonell, C.; Gentiletti, G.; Escobar, S.; Acevedo, R.

    2007-11-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  4. A learning-based, fully automatic liver tumor segmentation pipeline based on sparsely annotated training data

    NASA Astrophysics Data System (ADS)

    Goetz, Michael; Heim, Eric; Maerz, Keno; Norajitra, Tobias; Hafezi, Mohammadreza; Fard, Nassim; Mehrabi, Arianeb; Knoll, Max; Weber, Christian; Maier-Hein, Lena; Maier-Hein, Klaus H.

    2016-03-01

    Current fully automatic liver tumor segmentation systems are designed to work on a single CT-image. This hinders these systems from the detection of more complex types of liver tumor. We therefore present a new algorithm for liver tumor segmentation that allows incorporating different CT scans and requires no manual interaction. We derive a liver segmentation with state-of-the-art shape models which are robust to initialization. The tumor segmentation is then achieved by classifying all voxels into healthy or tumorous tissue using Extremely Randomized Trees with an auto-context learning scheme. Using DALSA enables us to learn from only sparse annotations and allows a fast set-up for new image settings. We validate the quality of our algorithm with exemplary segmentation results.
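
    As a simplified sketch of the voxel classification step (the shape-model liver segmentation, feature extraction and auto-context iterations are assumed to exist; the array layouts are illustrative):

      import numpy as np
      from sklearn.ensemble import ExtraTreesClassifier

      def train_voxel_classifier(features, labels):
          """features: (n_voxels, n_features) from the sparsely annotated slices;
          labels: 1 = tumour, 0 = healthy liver tissue."""
          return ExtraTreesClassifier(n_estimators=200, n_jobs=-1).fit(features, labels)

      def segment(clf, volume_features, liver_mask):
          """Classify every voxel inside the liver mask; return a tumour probability map.
          volume_features: (n_voxels_total, n_features) for the whole volume, row-major."""
          prob = np.zeros(liver_mask.shape, dtype=np.float32)
          prob[liver_mask] = clf.predict_proba(volume_features[liver_mask.ravel()])[:, 1]
          return prob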

  5. Applications of Neural Network Models in Automatic Speech Recognition.

    DTIC Science & Technology

    1986-09-29

    computing elements that follow the principles of physiological neurons, called neural network models, have been shown to have the capability of learning...to recognize patterns and to retrieve complete patterns from partial representations. The implementation of neural network models as VLSI or ULSI chips...tion of millions, if not billions, of these computing elements. Until recently, this was a practical impossibility. But great advances in VLSI

  6. Toward Automatic Rhodopsin Modeling as a Tool for High-Throughput Computational Photobiology.

    PubMed

    Melaccio, Federico; Del Carmen Marín, María; Valentini, Alessio; Montisci, Fabio; Rinaldi, Silvia; Cherubini, Marco; Yang, Xuchun; Kato, Yoshitaka; Stenrup, Michael; Orozco-Gonzalez, Yoelvis; Ferré, Nicolas; Luk, Hoi Ling; Kandori, Hideki; Olivucci, Massimo

    2016-12-13

    We report on a prototype protocol for the automatic and fast construction of congruous sets of QM/MM models of rhodopsin-like photoreceptors and of their mutants. In the present implementation the information required for the construction of each model is essentially a crystallographic structure or a comparative model complemented with information on the protonation state of ionizable side chains and distributions of external counterions. Starting with such information, a model formed by a fixed environment system, a flexible cavity system, and a chromophore system is automatically generated. The results of the predicted vertical excitation energy for 27 different rhodopsins including vertebrate, invertebrate, and microbial pigments indicate that such basic models could be employed for predicting trends in spectral changes and/or correlate the spectral changes with structural variations in large sets of proteins.

  7. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote the modeling, we had developed the CADLIVE dynamic simulator that automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility by CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to the research of systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction.

  8. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food / coffee, banks/ATM etc. and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll-free DID (Direct Inward Dialing) number using a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format protocol for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing Atom feeds which provide data to the system. Moreover, we describe a cost-effective way for providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user's coordinates and thereby enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the number and other information such as the daily price of gas, motel rates, etc., automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update listings in the database automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
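
    The distance step reduces to a chord-length comparison, sketched below (the radius constant and business list format are illustrative):

      import math

      EARTH_RADIUS_KM = 6371.0

      def to_cartesian(lat_deg, lon_deg):
          lat, lon = math.radians(lat_deg), math.radians(lon_deg)
          return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
                  EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
                  EARTH_RADIUS_KM * math.sin(lat))

      def chord_distance(p, q):
          """Straight-line distance through the Earth; preserves geodesic ordering."""
          return math.dist(to_cartesian(*p), to_cartesian(*q))

      def nearest(user_latlon, businesses):
          """businesses: list of (name, (lat, lon)); returns the closest entry."""
          return min(businesses, key=lambda b: chord_distance(user_latlon, b[1]))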

  9. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    NASA Astrophysics Data System (ADS)

    Juromskiy, V. M.

    2016-09-01

    A mathematical model is developed for the electric drive of a high-speed separation device in the Simulink dynamic-system modeling environment of MATLAB. The model is focused on the study of automatic control systems of the power factor (Cosφ) of an actuator, which compensate the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of the structural modeling of dynamic processes.

  10. Automatic Calibration Method for a Storm Water Runoff Model

    NASA Astrophysics Data System (ADS)

    Barco, J.; Wong, K. M.; Hogue, T.; Stenstrom, M. K.

    2007-12-01

    Major metropolitan areas are characterized by continuous increases in imperviousness due to urban development. Increasing imperviousness increases runoff volume and maximum rates of runoff, with generally negative consequences for natural systems. To avoid environmental degradation, new development standards often prohibit increases in total runoff volume and may limit maximum flow rates. Methods to reduce runoff volume and maximum runoff rate are required, and solutions to the problems may benefit from the use of advanced models. In this study the U.S. Storm Water Management Model (SWMM) was adapted and calibrated to the Ballona Creek watershed, a large urban catchment in Southern California. A geographic information system (GIS) was used to process the input data and generate the spatial distribution of precipitation. An optimization procedure using the Complex Method was incorporated to estimate runoff parameters, and ten storms were used for calibration and validation. The calibrated model predicted the observed outputs with reasonable accuracy. A sensitivity analysis showed the impact of the model parameters, and results were most sensitive to imperviousness and impervious depression storage and least sensitive to Manning roughness for surface flow. Optimized imperviousness was greater than imperviousness predicted from landuse information. The results demonstrate that this methodology of integrating GIS and stormwater model with a constrained optimization technique can be applied to large watersheds, and can be a useful tool to evaluate alternative strategies to reduce runoff rate and volume.
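
    A generic calibration loop in the spirit of the study is sketched below; a bounded global optimiser is substituted for the paper's Complex Method, run_swmm is a placeholder for a call into the rainfall-runoff model, and the parameter bounds are illustrative.

      import numpy as np
      from scipy.optimize import differential_evolution

      PARAM_BOUNDS = [(0.2, 0.9),    # imperviousness fraction
                      (0.5, 5.0),    # impervious depression storage (mm)
                      (0.01, 0.1)]   # Manning roughness for surface flow

      def run_swmm(params, storm):
          """Placeholder: run the hydrologic model and return simulated flows."""
          raise NotImplementedError

      def objective(params, storms, observed):
          # sum of squared errors over all calibration storms
          return sum(np.sum((run_swmm(params, s) - q) ** 2)
                     for s, q in zip(storms, observed))

      def calibrate(storms, observed):
          result = differential_evolution(objective, PARAM_BOUNDS,
                                          args=(storms, observed), maxiter=50, seed=0)
          return result.x           # calibrated parameter vector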

  11. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of the wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller which drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller with weights computed by the PSO-based technique.
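
    The two-level loop can be sketched as an outer particle-swarm search over candidate weight vectors, with the closed-loop NMPC simulation hidden behind a placeholder cost function (the swarm parameters are illustrative):

      import numpy as np

      def simulate_nmpc(weights):
          """Placeholder: run the NMPC-controlled turbine with these cost-function
          weights and return a scalar performance cost."""
          raise NotImplementedError

      def pso_tune(dim, lo, hi, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(0)
          x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
          v = np.zeros_like(x)
          p_best = x.copy()
          p_cost = np.array([simulate_nmpc(p) for p in x])
          g_best = p_best[p_cost.argmin()]
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
              x = np.clip(x + v, lo, hi)
              cost = np.array([simulate_nmpc(p) for p in x])
              better = cost < p_cost
              p_best[better], p_cost[better] = x[better], cost[better]
              g_best = p_best[p_cost.argmin()]
          return g_best        # weighting coefficients handed to the NMPC controller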

  12. Limited persistence models for SAR automatic target recognition

    NASA Astrophysics Data System (ADS)

    Sugavanam, Nithin; Ertin, Emre

    2017-04-01

    We consider the task of estimating the scattering coefficients and locations of the scattering centers that exhibit limited azimuthal persistence for a wide-angle synthetic aperture radar (SAR) sensor operating in spotlight mode. We exploit the sparsity of the scattering centers in the spatial domain as well as the slow-varying structure of the scattering coefficients in the azimuth domain to solve the ill-posed linear inverse problem. Furthermore, we utilize this recovered model as a template for the task of target recognition and pose estimation. We also investigate the effects of missing pulses in the initial recovery step of the model on the performance of the proposed method for target recognition. We empirically establish that the recovered model can be used to estimate the target class and pose simultaneously for the case of missing measurements.

  13. The Acoustic-Modeling Problem in Automatic Speech Recognition.

    DTIC Science & Technology

    1987-12-01

    systems that use an artificial grammar do so in order to set this uncertainty by fiat, thereby ensuring that their task will not be too difficult... an artificial grammar, the Pr(W = w)'s are known and Hm(W) can, in fact, achieve its lower bound if the system simply uses these probabilities. In a... finite-state grammar represented by that chain. As Jim Baker points out, the modeling of speech by a hidden Markov model should not be regarded as a

  14. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    also during the strong motion phase. This approach helps to overcome the limitation derived from the use of techniques based on simple Fourier Transform that provide good results when the response of the monitored system is stationary, but fails when the system exhibits a non-stationary behaviour. The main advantage derived from the use of the proposed approach for Structural Health Monitoring is based on the simplicity of the interpretation of the nonlinear variations of the fundamental frequency. The proposed methodology has been tested on numerical models of reinforced concrete structures designed for only gravity loads without and with the presence of infill panels. In order to verify the effectiveness of the proposed approach for the automatic evaluation of the fundamental frequency over time, the results of an experimental campaign of shaking table tests conducted at the seismic laboratory of University of Basilicata (SISLAB) have been used. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''. References Ditommaso, R., Mucciarelli, M., Ponzo, F.C. (2012) Analysis of non-stationary structural systems by using a band-variable filter. Bulletin of Earthquake Engineering. DOI: 10.1007/s10518-012-9338-y.

  15. The research of automatic speed control algorithm based on Green CBTC

    NASA Astrophysics Data System (ADS)

    Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi

    2017-06-01

    The automatic speed control algorithm is one of the core technologies of train operation control systems. It is a typical multi-objective optimization control algorithm, which achieves train speed control for timing, comfort, energy saving and precise parking. At present, automatic train speed control technology is widely used in metro and inter-city railways. It has been found that automatic speed control technology can effectively reduce the driver's workload and improve operation quality. However, the algorithms currently in use perform poorly with respect to energy saving, sometimes worse than manual driving. In order to solve the energy-saving problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. Based on the Green CBTC system, the algorithm can adjust the operation status of the train to improve the utilization rate of regenerative braking feedback energy while ensuring the timing, comfort and precise parking targets. For this reason, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. The simulation results show that the algorithm based on the Green CBTC system can effectively reduce energy consumption by improving the utilization rate of regenerative braking feedback energy.

  16. Automatic age and gender classification using supervised appearance model

    NASA Astrophysics Data System (ADS)

    Bukar, Ali Maina; Ugail, Hassan; Connah, David

    2016-11-01

    Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
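
    A hedged sketch of the core substitution described above: partial least squares replaces unsupervised PCA as the feature extractor before a simple classifier. It uses generic feature vectors rather than the authors' AAM shape-and-texture features, and all data here are synthetic.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 50))                       # appearance feature vectors (synthetic)
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # binary labels, e.g. gender

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Supervised dimensionality reduction: PLS latent scores instead of PCA components.
      Y_tr = np.eye(2)[y_tr]                               # one-hot class indicators as PLS targets
      pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
      Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

      clf = LogisticRegression().fit(Z_tr, y_tr)
      print("held-out accuracy:", clf.score(Z_te, y_te))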

  17. Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.

    PubMed

    Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata

    2016-10-07

    Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. Moreover, a lot of possible technical problems occur in everyday histological material, which increases the complexity of the problem. Thus, a full context-based analysis of histological specimens is also needed in the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained specimens of meningiomas for automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm to allow for the proper detection of a set of hot-spot fields. The designed whole slide image processing scheme eliminates such artifacts as hemorrhages, folds or stained vessels from the region of interest. To validate automatic results, a set of 104 meningioma specimens were selected and twenty hot-spots inside them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were automatically examined properly with two fields of view with a technical problem at the very most. Next, 13 had three such fields, and only seven specimens did not meet the requirement for the automatic examination. Generally, the Automatic System identifies hot-spot areas, especially their maximum points, better. Analysis of the results confirms the very high concordance between an automatic Ki-67 examination and the expert's results, with a Spearman

  18. Automatic Adjustment of Wide-Base Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  19. A new approach for automatic sleep scoring: Combining Taguchi based complex-valued neural network and complex wavelet transform.

    PubMed

    Peker, Musa

    2016-06-01

    Automatic classification of sleep stages is one of the most important methods used for diagnostic procedures in psychiatry and neurology. This method, which has been developed by sleep specialists, is a time-consuming and difficult process. Generally, electroencephalogram (EEG) signals are used in sleep scoring. In this study, a new complex classifier-based approach is presented for automatic sleep scoring using EEG signals. In this context, complex-valued methods were utilized in the feature selection and classification stages. In the feature selection stage, features of EEG data were extracted with the help of a dual tree complex wavelet transform (DTCWT). In the next phase, five statistical features were obtained. These features are classified using complex-valued neural network (CVANN) algorithm. The Taguchi method was used in order to determine the effective parameter values in this CVANN. The aim was to develop a stable model involving parameter optimization. Different statistical parameters were utilized in the evaluation phase. Also, results were obtained in terms of two different sleep standards. In the study in which a 2nd level DTCWT and CVANN hybrid model was used, 93.84% accuracy rate was obtained according to the Rechtschaffen & Kales (R&K) standard, while a 95.42% accuracy rate was obtained according to the American Academy of Sleep Medicine (AASM) standard. Complex-valued classifiers were found to be promising in terms of the automatic sleep scoring and EEG data.
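
    A rough sketch of the feature pipeline under stated substitutions: PyWavelets' real discrete wavelet transform stands in for the dual-tree complex wavelet transform, five simple statistics are taken per subband, and an ordinary scikit-learn MLP stands in for the complex-valued neural network; epoch data and labels are synthetic.

      import numpy as np
      import pywt
      from scipy.stats import kurtosis, skew
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(2)
      epochs = rng.normal(size=(200, 3000))        # 200 synthetic 30-s EEG epochs at 100 Hz
      labels = rng.integers(0, 5, size=200)        # 5 sleep stages (synthetic)

      def epoch_features(signal):
          """Five statistics per wavelet subband (stand-in for level-2 DTCWT features)."""
          feats = []
          for band in pywt.wavedec(signal, "db4", level=2):
              feats += [band.mean(), band.std(), skew(band), kurtosis(band), np.sum(band ** 2)]
          return feats

      X = np.array([epoch_features(e) for e in epochs])
      clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X, labels)
      print("training accuracy (synthetic data):", clf.score(X, labels))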

  20. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantages of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  1. Automatic evidence quality prediction to support evidence-based decision making.

    PubMed

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance
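
    The exact definition of the average error distance is not spelled out in this abstract; a plausible reading, sketched below with assumed names, is the mean absolute distance between predicted and reference quality grades placed on an ordinal scale (e.g. A, B, C, D).

      GRADE_INDEX = {"A": 0, "B": 1, "C": 2, "D": 3}   # illustrative ordinal grade scale

      def average_error_distance(reference, predicted):
          """Mean absolute distance between predicted and reference grades."""
          distances = [abs(GRADE_INDEX[r] - GRADE_INDEX[p])
                       for r, p in zip(reference, predicted)]
          return sum(distances) / len(distances)

      ref = ["A", "B", "B", "C", "D"]
      pred = ["A", "C", "B", "B", "D"]
      print("AED =", average_error_distance(ref, pred))   # 0.4 for this toy example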

  2. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounds and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebrae, the circle model is extended along the z-axis to a cylinder model to take into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. In the experiments, the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.
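
    As one illustrative piece of the tracking step, the sketch below fits a circle to candidate vessel boundary points with the algebraic (Kåsa) least-squares method; this is a generic circle fit, not the authors' full circular/cylindrical model fitting or graph-cut refinement.

      import numpy as np

      def fit_circle(points):
          """Algebraic least-squares (Kasa) circle fit; returns centre (cx, cy) and radius."""
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          b = x ** 2 + y ** 2
          (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
          radius = np.sqrt(c + cx ** 2 + cy ** 2)
          return cx, cy, radius

      # Synthetic cross-sectional boundary points of a vessel of radius 2 centred at (10, -3).
      theta = np.linspace(0, 2 * np.pi, 40)
      pts = np.column_stack([10 + 2 * np.cos(theta), -3 + 2 * np.sin(theta)])
      pts += np.random.default_rng(3).normal(scale=0.05, size=pts.shape)

      print("fitted centre and radius:", fit_circle(pts))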

  3. Penalty function model on the automatic testing of form errors

    NASA Astrophysics Data System (ADS)

    Xu, Xue-lin

    1993-09-01

    In this paper, a principle and a method for evaluating form errors with a penalty function model in dynamic measurement are introduced. Roundness and straightness error measurements are taken as typical examples. Based on the relevant definitions of the Chinese National Standard (GB 1183-80), the objective function f(x, y) = Rmax(x, y) - Rmin(x, y) is set up, and a feasible set S defined by the design tolerance for the variables x, y is established; that is, (x, y) ∈ S is the constraint condition. On this basis, the penalty function model and the penalty factors are established, and an optimization search over the measurement process is undertaken so that points outside the feasible set can be rejected. Finally, the value of the form error is obtained. An example based on this theory can be run on a computer.
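
    A hedged sketch of the idea in the abstract, with assumed details: for measured roundness data, the form error f(x, y) = Rmax(x, y) - Rmin(x, y) is minimized over the candidate centre (x, y), and a quadratic penalty term keeps the search inside an assumed feasible set S (here, a box around the nominal centre).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
      # Synthetic roundness measurement: nominal radius 10 with small form deviations.
      profile = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
      profile += rng.normal(scale=0.02, size=(360, 2))

      BOUND = 0.5          # half-width of the assumed feasible box S for the centre
      PENALTY = 1.0e3      # penalty factor applied outside S

      def objective(centre):
          radii = np.hypot(profile[:, 0] - centre[0], profile[:, 1] - centre[1])
          form_error = radii.max() - radii.min()               # f(x, y) = Rmax - Rmin
          violation = np.maximum(np.abs(centre) - BOUND, 0.0)  # distance outside the box
          return form_error + PENALTY * np.sum(violation ** 2)

      result = minimize(objective, x0=[0.1, -0.1], method="Nelder-Mead")
      print("optimal centre:", result.x, "roundness error:", result.fun)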

  4. Development of automatic target recognition for infrared sensor-based close-range land mine detector

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Garcia, Sigberto A.; Cloud, Eugene L.; Duvoisin, Herbert A., III; Long, Daniel T.; Hackett, Jay K.

    1995-06-01

    Infrared imagery scenes change continuously with environmental conditions. Strategic targets embedded in them are often difficult to identify with the naked eye. An IR sensor-based mine detector must therefore include Automatic Target Recognition (ATR) to detect and extract land mines from IR scenes. In the course of the ATR development process, mine signature data were collected using a commercial 8-12 μm spectral range FLIR, model Inframetrics 445L, and a commercial 3-5 μm staring focal plane array FLIR, model Infracam. These sensors were customized to the required field-of-view for short range operation. These baseline data were then input into a specialized parallel processor on which the mine detection algorithm is developed and trained. The ATR is feature-based and consists of several subprocesses to progress from raw input IR imagery to a neural network classifier for final nomination of the targets. Initially, image enhancement is used to remove noise and sensor artifacts. Three preprocessing techniques, namely model-based segmentation, a multi-element prescreener, and a geon detector, are then applied to extract specific features of the targets and to reject all objects that do not resemble mines. Finally, to further reduce the false alarm rate, the extracted features are presented to the neural network classifier. Depending on the operational circumstances, one of three neural network techniques will be adopted: back propagation, supervised real-time learning, or unsupervised real-time learning. The Close Range IR Mine Detection System is an Army program currently being experimentally developed to be demonstrated in the Army's Advanced Technology Demonstration in FY95. The ATR resulting from this program will be integrated in the 21st Century Land Warrior program, for which mine avoidance capability is of primary interest.

  5. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4% and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %.

  6. Automatic depth determination for sculpting based on volume rendering

    NASA Astrophysics Data System (ADS)

    Yi, Jaeyoun; Ra, Jong Beom

    2004-05-01

    An interactive sculpting tool is being widely used to segment a 3-D object on a volume rendered image for improving the intuitiveness. However, it is very hard to segment only an outer part of a 3-D object, since the conventional method cannot handle the depth of removal. In this paper, we present an effective method to determine the depth of removal, by using the proposed spring-rod model and the voxel-opacity. To determine the depth of removal, the 2-D array of rigid rods is constructed after a 2-D closed loop is defined on a volume-rendered image by a user. Each rigid rod is located at a digitized position inside the user-drawn closed loop and its direction is coincident with that of projecting rays. And every rod has a frictionless ball, which is interconnected with its neighboring balls through ideal springs. In addition, we assume that an external force defined by the corresponding voxel-opacity value is exerted on each ball along the direction of the projected ray. Using this spring-rod system model, we can determine final positions of balls, which represent the depths of removal. Then, the outer part can be properly removed. The proposed method is applied to various medical image data and is evaluated to provide robust results with easy user-interaction.

  7. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study: Long Period (LP) and Volcano-Tectonic (VT). Experiments show that the standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can lead to reductions in the false positive rate in event detection as high as 31% with a true positive accuracy equal to 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events based on the signal-to-noise ratio criteria recommended by a volcano expert.
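
    A minimal sketch of the plain-HMM baseline (without the duration modelling contributed by the paper), assuming the hmmlearn package: one Gaussian HMM is trained per event class on feature sequences, and a test segment is assigned to the class whose model gives the highest log-likelihood; all data here are synthetic.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      rng = np.random.default_rng(5)

      def synthetic_sequences(offset, n_seq=20, length=100, n_feat=4):
          """Synthetic stand-in for per-frame seismic features (e.g. band energies)."""
          return [rng.normal(loc=offset, size=(length, n_feat)) for _ in range(n_seq)]

      training = {"LP": synthetic_sequences(0.0), "VT": synthetic_sequences(1.5)}

      models = {}
      for event_type, seqs in training.items():
          X = np.vstack(seqs)
          lengths = [len(s) for s in seqs]
          models[event_type] = GaussianHMM(n_components=3, covariance_type="diag",
                                           n_iter=20, random_state=0).fit(X, lengths)

      test_segment = rng.normal(loc=1.5, size=(100, 4))        # should look like a VT event
      scores = {k: m.score(test_segment) for k, m in models.items()}
      print("detected event type:", max(scores, key=scores.get))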

  8. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. They are also used as proxies for measuring the relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  9. Action levels for automatic gamma-measurements based on probabilistic radionuclide transport calculations.

    PubMed

    Lauritzen, Bent; Hedemann-Jensen, Per

    2005-12-01

    In the event of a nuclear or radiological emergency resulting in an atmospheric release of radioactive materials, stationary gamma-measurements, for example obtained from distributed, automatic monitoring stations, may provide a first assessment of exposures resulting from airborne and deposited activity. Decisions on the introduction of countermeasures for the protection of the public can be based on such off-site gamma measurements. A methodology is presented for calculation of gamma-radiation action levels for the introduction of specific countermeasures, based on probabilistic modelling of the dispersion of radionuclides and the radiation exposure. The methodology is applied to a nuclear accident situation with long-range atmospheric dispersion of radionuclides, and action levels of dose rate measured by a network of monitoring stations are estimated for sheltering and foodstuff restrictions. It is concluded that the methodology is applicable to all emergency countermeasures following a nuclear accident but measurable quantities other than ambient dose equivalent rate are needed for decisions on the introduction of foodstuff countermeasures.

  10. Automatic construction of rule-based ICD-9-CM coding systems

    PubMed Central

    Farkas, Richárd; Szarvas, György

    2008-01-01

    Background In this paper we focus on the problem of automatically constructing ICD-9-CM coding systems for radiology reports. ICD-9-CM codes are used for billing purposes by health institutes and are assigned to clinical records manually following clinical treatment. Since this labeling task requires expert knowledge in the field of medicine, the process itself is costly and is prone to errors as human annotators have to consider thousands of possible codes when assigning the right ICD-9-CM labels to a document. In this study we use the datasets made available for training and testing automated ICD-9-CM coding systems by the organisers of an International Challenge on Classifying Clinical Free Text Using Natural Language Processing in spring 2007. The challenge itself was dominated by entirely or partly rule-based systems that solve the coding task using a set of hand crafted expert rules. Since the feasibility of the construction of such systems for thousands of ICD codes is indeed questionable, we decided to examine the problem of automatically constructing similar rule sets that turned out to achieve a remarkable accuracy in the shared task challenge. Results Our results are very promising in the sense that we managed to achieve comparable results with purely hand-crafted ICD-9-CM classifiers. Our best model got a 90.26% F measure on the training dataset and an 88.93% F measure on the challenge test dataset, using the micro-averaged Fβ=1 measure, the official evaluation metric of the International Challenge on Classifying Clinical Free Text Using Natural Language Processing. This result would have placed second in the challenge, with a hand-crafted system achieving slightly better results. Conclusions Our results demonstrate that hand-crafted systems – which proved to be successful in ICD-9-CM coding – can be reproduced by replacing several laborious steps in their construction with machine learning models. These hybrid systems preserve the favourable
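
    For reference, the micro-averaged F-measure used as the official metric can be computed as sketched below; this is a generic multi-label micro-F1 with made-up label sets, not the challenge's scoring script.

      from sklearn.metrics import f1_score
      from sklearn.preprocessing import MultiLabelBinarizer

      # Gold and predicted ICD-9-CM code sets per document (illustrative).
      gold = [{"786.2"}, {"780.6", "786.2"}, {"599.0"}]
      pred = [{"786.2"}, {"786.2"}, {"599.0", "780.6"}]

      mlb = MultiLabelBinarizer().fit(gold + pred)
      y_true, y_pred = mlb.transform(gold), mlb.transform(pred)

      # Micro-averaging pools true/false positives over all labels before computing F1.
      print("micro-averaged F1:", f1_score(y_true, y_pred, average="micro"))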

  11. A multi-algorithm-based automatic person identification system

    NASA Astrophysics Data System (ADS)

    Monwar, Md. Maruf; Gavrilova, Marina

    2010-04-01

    Multimodal biometrics is an emerging area of research that aims at increasing the reliability of biometric systems by utilizing more than one biometric in the decision-making process. In this work, we develop a multi-algorithm-based multimodal biometric system utilizing face and ear features together with rank- and decision-level fusion. We use multilayer perceptron network and fisherimage approaches for individual face and ear recognition. After face and ear recognition, we integrate the results of the two face matchers using a rank-level fusion approach. We experiment with the highest rank, Borda count, logistic regression and Markov chain methods of rank-level fusion. Due to its better recognition performance, we employ the Markov chain approach to combine the face decisions. Similarly, we obtain a combined ear decision. These two decisions are then combined for the final identification decision. We experiment with the 'AND'/'OR' rule, the majority voting rule and the weighted majority voting rule of decision-level fusion. From the experimental results, we observed that the weighted majority voting rule works better than the other decision fusion approaches, and hence we incorporate this fusion approach for the final identification decision. The final results indicate that the multi-algorithm-based approach can certainly improve the recognition performance of multibiometric systems.
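
    A small sketch of one of the rank-level fusion rules mentioned above, the Borda count, with hypothetical candidate identities and ranked lists from a face matcher and an ear matcher; the Markov chain and decision-level rules are not shown.

      def borda_count(ranked_lists):
          """Fuse ranked candidate lists: rank 1 gets the most points, last rank the fewest."""
          scores = {}
          for ranking in ranked_lists:
              n = len(ranking)
              for position, candidate in enumerate(ranking):
                  scores[candidate] = scores.get(candidate, 0) + (n - position)
          return sorted(scores, key=scores.get, reverse=True)

      face_ranking = ["id_17", "id_03", "id_42", "id_08"]   # best-to-worst from face matcher
      ear_ranking  = ["id_03", "id_17", "id_08", "id_42"]   # best-to-worst from ear matcher

      print("fused ranking:", borda_count([face_ranking, ear_ranking]))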

  12. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually involves a Computed Tomography (CT) examination. Aiming at eliminating this high CT radiation exposure, this work presents a new methodology for replacing CT with a radiation-free laser scanner in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT requires the determination of the ribs' external outline, at the point of maximum sternum depression where the prosthesis is placed, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using soft tissue thicknesses determined with image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outline in data from 20 patient scans. Tests revealed that rib positions can be estimated with an average error of about 6.82 ± 5.7 mm for the left and right sides of the patient. Such an error range is well below that of current manual prosthesis modeling (11.7 ± 4.01 mm), even without CT imaging, indicating a considerable step forward towards replacing CT with a 3D scanner for prosthesis personalization.

  13. Automatic generation of computable implementation guides from clinical information models.

    PubMed

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  14. UAV Visual Autolocalization Based on Automatic Landmark Recognition

    NASA Astrophysics Data System (ADS)

    Silva Filho, P.; Shiguemori, E. H.; Saotome, O.

    2017-08-01

    Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a highly discussed problem in the scientific community. There are several approaches being developed, but the main strategies yet considered are computer vision based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known, landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position in each landmark recognition was compatible with the GPS data, stating that the developed method can be used as an alternative navigation system.

  15. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement. Little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters between different stations must be measured. In order to make scanning measurement intelligent and rapid, in this paper we develop a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld laser scanner without additional complex work. The double camera on the laser scanner photographs artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by a least-squares common-points transformation. After that, the double camera can directly measure the laser point cloud on the surface of the object and obtain the point cloud data in a unified coordinate system. There are three major contributions in this paper. Firstly, a laser scanner based on binocular vision is designed with a double camera and one laser head, by which real-time orientation of the laser scanner is realized and efficiency is improved. Secondly, coding markers are introduced to solve the data matching problem, and a random coding method is proposed. Compared with other coding methods, markers produced with this method are simple to match and can avoid shading the object. Finally, a recognition method for the coding markers is proposed; with the use of distance-based recognition, it is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method can realize the dynamic measurement of handheld laser
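
    A compact sketch of the least-squares common-points transformation step, assuming matched 3-D control points from two stations: the optimal rotation and translation are recovered with the SVD-based (Kabsch) solution; the random-coding target detection itself is not shown.

      import numpy as np

      def rigid_transform(src, dst):
          """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch/SVD)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                 # guard against a reflection solution
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t

      rng = np.random.default_rng(6)
      control_src = rng.uniform(-1, 1, (10, 3))                 # coded targets, scanner frame
      angle = np.deg2rad(30)
      true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      true_t = np.array([0.2, -0.1, 0.05])
      control_dst = control_src @ true_R.T + true_t             # same targets, object frame

      R, t = rigid_transform(control_src, control_dst)
      print("recovered translation:", t)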

  16. Automatic generation of water distribution systems based on GIS data.

    PubMed

    Sitzenfrei, Robert; Möderl, Michael; Rauch, Wolfgang

    2013-09-01

    In the field of water distribution system (WDS) analysis, case study research is needed for testing or benchmarking optimisation strategies and newly developed software. However, data availability for the investigation of real cases is limited due to time and cost needed for data collection and model setup. We present a new algorithm that addresses this problem by generating WDSs from GIS using population density, housing density and elevation as input data. We show that the resulting WDSs are comparable to actual systems in terms of network properties and hydraulic performance. For example, comparing the pressure heads for an actual and a generated WDS results in pressure head differences of ±4 m or less for 75% of the supply area. Although elements like valves and pumps are not included, the new methodology can provide water distribution systems of varying levels of complexity (e.g., network layouts, connectivity, etc.) to allow testing design/optimisation algorithms on a large number of networks. The new approach can be used to estimate the construction costs of planned WDSs aimed at addressing population growth or at comparisons of different expansion strategies in growth corridors.

  17. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise, as well as incorrect classification of arrivals, are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al., 2015, the advantages of using an SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and to classify them as earthquakes or quarry blasts. We aim to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
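
    A bare-bones sketch of the classification stage under stated assumptions: generic per-event feature vectors (the actual waveform features are not specified here) are standardized and fed to a scikit-learn RBF-kernel SVM that separates earthquakes from quarry blasts; the data are synthetic.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(7)
      # Synthetic feature vectors (e.g. spectral ratios, amplitudes) for two event classes.
      earthquakes = rng.normal(loc=0.0, size=(150, 6))
      blasts = rng.normal(loc=1.0, size=(150, 6))
      X = np.vstack([earthquakes, blasts])
      y = np.array([0] * 150 + [1] * 150)          # 0 = earthquake, 1 = quarry blast

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      model.fit(X_tr, y_tr)
      print("held-out accuracy:", model.score(X_te, y_te))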

  18. Effective key parameter determination for an automatic approach to land cover classification based on multispectral remote sensing imagery.

    PubMed

    Wang, Yong; Jiang, Dong; Zhuang, Dafang; Huang, Yaohuan; Wang, Wei; Yu, Xinfang

    2013-01-01

    The classification of land cover based on satellite data is important for many areas of scientific research. Unfortunately, some traditional land cover classification methods (e.g. known as supervised classification) are very labor-intensive and subjective because of the required human involvement. Jiang et al. proposed a simple but robust method for land cover classification using a prior classification map and a current multispectral remote sensing image. This new method has proven to be a suitable classification method; however, its drawback is that it is a semi-automatic method because the key parameters cannot be selected automatically. In this study, we propose an approach in which the two key parameters are chosen automatically. The proposed method consists primarily of the following three interdependent parts: the selection procedure for the pure-pixel training-sample dataset, the method to determine the key parameters, and the optimal combination model. In this study, the proposed approach employs both overall accuracy and their Kappa Coefficients (KC), and Time-Consumings (TC, unit: second) in order to select the two key parameters automatically instead of using a test-decision, which avoids subjective bias. A case study of Weichang District of Hebei Province, China, using Landsat-5/TM data of 2010 with 30 m spatial resolution and prior classification map of 2005 recognised as relatively precise data, was conducted to test the performance of this method. The experimental results show that the methodology determining the key parameters uses the portfolio optimisation model and increases the degree of automation of Jiang et al.'s classification method, which may have a wide scope of scientific application.

  19. Effective Key Parameter Determination for an Automatic Approach to Land Cover Classification Based on Multispectral Remote Sensing Imagery

    PubMed Central

    Wang, Yong; Jiang, Dong; Zhuang, Dafang; Huang, Yaohuan; Wang, Wei; Yu, Xinfang

    2013-01-01

    The classification of land cover based on satellite data is important for many areas of scientific research. Unfortunately, some traditional land cover classification methods (e.g. known as supervised classification) are very labor-intensive and subjective because of the required human involvement. Jiang et al. proposed a simple but robust method for land cover classification using a prior classification map and a current multispectral remote sensing image. This new method has proven to be a suitable classification method; however, its drawback is that it is a semi-automatic method because the key parameters cannot be selected automatically. In this study, we propose an approach in which the two key parameters are chosen automatically. The proposed method consists primarily of the following three interdependent parts: the selection procedure for the pure-pixel training-sample dataset, the method to determine the key parameters, and the optimal combination model. In this study, the proposed approach employs both overall accuracy and their Kappa Coefficients (KC), and Time-Consumings (TC, unit: second) in order to select the two key parameters automatically instead of using a test-decision, which avoids subjective bias. A case study of Weichang District of Hebei Province, China, using Landsat-5/TM data of 2010 with 30 m spatial resolution and prior classification map of 2005 recognised as relatively precise data, was conducted to test the performance of this method. The experimental results show that the methodology determining the key parameters uses the portfolio optimisation model and increases the degree of automation of Jiang et al.'s classification method, which may have a wide scope of scientific application. PMID:24204582

  20. Towards an Automatic and Application-Based Eigensolver Selection

    SciTech Connect

    Zhang, Yeliang; Li, Xiaoye S.; Marques, Osni

    2005-09-09

    The computation of eigenvalues and eigenvectors is an important and often time-consuming phase in computer simulations. Recent efforts in the development of eigensolver libraries have given users good algorithms without the need for users to spend much time in programming. Yet, given the variety of numerical algorithms that are available to domain scientists, choosing the ''best'' algorithm suited for a particular application is a daunting task. As simulations become increasingly sophisticated and larger, it becomes infeasible for a user to try out every reasonable algorithm configuration in a timely fashion. Therefore, there is a need for an intelligent engine that can guide the user through the maze of various solvers with various configurations. In this paper, we present a methodology and a software architecture aiming at determining the best solver based on the application type and the matrix properties. We combine a decision tree and an intelligent engine to select a solver and a preconditioner combination for the application submitted by the user. We also discuss how our system interface is implemented with third party numerical libraries. In the case study, we demonstrate the feasibility and usefulness of our system with a simplified linear solving system. Our experiments show that our proposed intelligent engine is quite adept in choosing a suitable algorithm for different applications.

  1. Automatic exudate detection using active contour model and regionwise classification.

    PubMed

    Harangi, B; Lazar, I; Hajdu, A

    2012-01-01

    Diabetic retinopathy is one of the most common causes of blindness in the world. Exudates are among the early signs of this disease, so their proper detection is a very important task for preventing its consequences. In this paper, we propose a novel approach for exudate detection. First, we identify possible regions containing exudates using grayscale morphology. Then, we apply an active contour based method that minimizes the Chan-Vese energy to extract accurate borders of the candidates. To remove false candidates whose borders are strong enough to pass the active contour stage, we use a regionwise classifier. Hence, we extract several shape features for each candidate and let a boosted Naïve Bayes classifier eliminate the false candidates. We considered the publicly available DiaretDB1 color fundus image set for testing, where the proposed method outperformed several state-of-the-art exudate detectors.
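
    A rough sketch of the border-refinement idea using scikit-image's Chan-Vese implementation on a synthetic bright blob standing in for an exudate candidate region; the grayscale-morphology candidate search and the boosted Naïve Bayes region classifier are omitted.

      import numpy as np
      from skimage.segmentation import chan_vese

      # Synthetic candidate patch: a bright elliptical 'exudate' on a darker background.
      yy, xx = np.mgrid[0:64, 0:64]
      patch = 0.2 + 0.02 * np.random.default_rng(8).normal(size=(64, 64))
      patch[((yy - 32) / 10.0) ** 2 + ((xx - 30) / 14.0) ** 2 < 1.0] += 0.6

      # Chan-Vese active contour minimising the region-based energy; returns a binary mask.
      mask = chan_vese(patch, mu=0.1, lambda1=1.0, lambda2=1.0, tol=1e-3)
      print("segmented candidate area (pixels):", int(mask.sum()))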

  2. Automatic kernel regression modelling using combined leave-one-out test score and regularised orthogonal least squares.

    PubMed

    Hong, X; Chen, S; Sharkey, P M

    2004-02-01

    This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score also known as the PRESS (Predicted REsidual Sums of Squares) statistic and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches, parameter regularisation via ridge regression and model optimal generalisation structure selection. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, develop an efficient recursive computation formula for PRESS errors in the regularised orthogonal least squares forward regression framework and hence construct a model with a good generalisation property. Based on the properties of the PRESS statistic the proposed algorithm can achieve a fully automated model construction procedure without resort to any other validation data set for model evaluation.
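
    To make the statistic concrete, the sketch below computes the leave-one-out PRESS for a ridge (regularised) linear-in-the-parameters model in closed form via the hat matrix, using the standard identity e_i / (1 - h_ii) for the deleted residuals; this is a generic illustration, not the recursive orthogonal-least-squares formulation derived in the paper.

      import numpy as np

      rng = np.random.default_rng(9)
      X = rng.normal(size=(60, 5))                      # regressor (design) matrix
      w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
      y = X @ w_true + 0.1 * rng.normal(size=60)

      lam = 0.1                                          # ridge regularisation parameter
      H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)   # hat matrix
      residuals = y - H @ y
      loo_residuals = residuals / (1.0 - np.diag(H))     # deleted (leave-one-out) residuals
      press = float(np.sum(loo_residuals ** 2))
      print("PRESS =", press)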

  3. Fast and automatic watermark resynchronization based on zernike moments

    NASA Astrophysics Data System (ADS)

    Kang, Xiangui; Liu, Chunhui; Zeng, Wenjun; Huang, Jiwu; Liu, Congbai

    2007-02-01

    In some applications such as real-time video applications, watermark detection needs to be performed in real time. To address image watermark robustness against geometric transformations such as the combination of rotation, scaling, translation and/or cropping (RST), many prior works choose exhaustive search method or template matching method to find the RST distortion parameters, then reverse the distortion to resynchronize the watermark. These methods typically impose huge computation burden because the search space is typically a multiple dimensional space. Some other prior works choose to embed watermarks in an RST invariant domain to meet the real time requirement. But it might be difficult to construct such an RST invariant domain. Zernike moments are useful tools in pattern recognition and image watermarking due to their orthogonality and rotation invariance property. In this paper, we propose a fast watermark resynchronization method based on Zernike moments, which requires only search over scaling factor to combat RST geometric distortion, thus significantly reducing the computation load. We apply the proposed method to circularly symmetric watermarking. According to Plancherel's Theorem and the rotation invariance property of Zernike moments, the rotation estimation only requires performing DFT on Zernike moments correlation value once. Thus for RST attack, we can estimate both rotation angle and scaling factor by searching for the scaling factor to find the overall maximum DFT magnitude mentioned above. With the estimated rotation angle and scaling factor parameters, the watermark can be resynchronized. In watermark detection, the normalized correlation between the watermark and the DFT magnitude of the test image is used. Our experimental results demonstrate the advantage of our proposed method. The watermarking scheme is robust to global RST distortion as well as JPEG compression. In particular, the watermark is robust to print-rescanning and

  4. Navigation accuracy after automatic- and hybrid-surface registration in sinus and skull base surgery.

    PubMed

    Grauvogel, Tanja Daniela; Engelskirchen, Paul; Semper-Hogg, Wiebke; Grauvogel, Juergen; Laszig, Roland

    2017-01-01

    Computer-aided surgery in ENT is mainly used for sinus surgery, but navigation accuracy still reaches its limits for skull base procedures. Knowledge of navigation accuracy in distinct anatomical regions is therefore mandatory. This study examined whether navigation accuracy can be improved in specific anatomical localizations by using a hybrid registration technique. Experimental phantom study. Operating room. The gold standard of screw registration was compared with automatic LED-mask registration alone, and in combination with additional surface matching. 3D-printer-based skull models with individually fabricated silicone skin were used for the experiments. Overall navigation accuracy considering 26 target fiducials distributed over each skull was measured, as well as the accuracy on selected anatomic localizations. Overall navigation accuracy was <1.0 mm in all cases, with the significantly lowest values after screw registration (0.66 ± 0.08 mm), followed by hybrid registration (0.83 ± 0.08 mm), and sole mask registration (0.92 ± 0.13 mm). On selected anatomic localizations screw registration was significantly superior on the sphenoid sinus and on the internal auditory canal. However, mask registration showed significantly better accuracy results on the midface. Navigation accuracy on skull base localizations could be significantly improved by the combination of mask registration and additional surface matching. Overall navigation accuracy gives insufficient information regarding navigation accuracy in a distinct anatomic area. The non-invasive LED-mask registration proved to be an alternative in clinical routine, showing the best accuracy results on the midface. For challenging skull base procedures a hybrid registration technique is recommended, which improves navigation accuracy significantly in this operating field. Invasive registration procedures are reserved for selected challenging skull base operations where the required high precision warrants the

  5. Preservation of memory-based automaticity in reading for older adults.

    PubMed

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research. (c) 2015 APA, all rights reserved).

  6. Integrating spatial altimetry data into the automatic calibration of hydrological models

    NASA Astrophysics Data System (ADS)

    Getirana, Augusto C. V.

    2010-06-01

    The automatic calibration of hydrological models has traditionally been performed using gauged data. However, the inaccessibility of remote areas and lack of financial support cause data to be lacking in large tropical basins, such as the Amazon basin. Advances in the acquisition, processing and availability of spatially distributed remotely sensed data make the evaluation of computational models easier and more practical. This paper presents the pioneering integration of spatial altimetry data into the automatic calibration of a hydrological model. The study area is the Branco River basin, located in the Northern Amazon basin. An empirical stage × discharge relation is obtained for the Negro River and transposed to the Branco River, which enables the correlation of spatial altimetry data with water discharge derived from the MGB-IPH hydrological model. Six scenarios are created combining two sets of objective functions with three different datasets. Two of them are composed of ENVISAT altimetric data, and the third one is derived from daily gauged discharges. The MOCOM-UA multi-criteria global optimization algorithm is used to optimize the model parameters. The calibration process is validated with gauged discharge at three gauge stations located along the Branco River and two tributaries. Results demonstrate that the combination of virtual stations along the river can provide reasonable parameters. Further, the considerably reduced number of observations provided by the satellite is not a restriction to the automatic calibration, yielding performance coefficients similar to those obtained with the process using daily gauged data.

  7. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively).
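
    As a rough illustration of the label-fusion step only, the sketch below performs voxel-by-voxel majority voting over candidate segmentations that are assumed to be already resampled into the subject's space. Template generation and nonlinear registration (ANIMAL/ANTs), which are the core of MAGeT Brain, are not reproduced here.

```python
# Minimal sketch of voxel-wise majority-vote label fusion, the final step of a
# multi-atlas / MAGeT-style pipeline. Candidate label volumes are assumed to be
# already resampled into the subject space; registration itself is not shown.
import numpy as np

def majority_vote(candidate_labels):
    """candidate_labels: list of integer label volumes with identical shape."""
    stack = np.stack(candidate_labels, axis=0)      # (n_templates, ...spatial dims)
    n_labels = int(stack.max()) + 1
    # Count votes per label at every voxel, then pick the most frequent label.
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

# Example with three tiny 2x2 "segmentations"
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
c = np.array([[0, 0], [1, 1]])
print(majority_vote([a, b, c]))   # -> [[0 1] [1 1]]
```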

  8. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
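
    Under simplifying assumptions, the recommendation step can be sketched as a multi-label classifier that maps data set meta-features to a set of applicable kernels. The meta-features, labels and the random-forest base learner below are synthetic placeholders; they are not the data characteristics or the multi-label methods evaluated in the paper.

```python
# Hedged sketch of the meta-learning idea: describe each data set by a few
# meta-features and learn a multi-label mapping to "applicable" kernels.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier

kernels = ["linear", "poly", "rbf", "sigmoid"]

rng = np.random.default_rng(0)
meta_X = rng.random((40, 5))             # 40 historical data sets, 5 meta-features each
meta_Y = rng.integers(0, 2, (40, 4))     # binary indicator: kernel applicable or not

recommender = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=50, random_state=0))
recommender.fit(meta_X, meta_Y)

new_dataset_features = rng.random((1, 5))          # meta-features of a new data set
recommended = recommender.predict(new_dataset_features)[0]
print([k for k, flag in zip(kernels, recommended) if flag])
```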

  9. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  10. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model.

    PubMed

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-21

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation
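
    The patient-specific shape model at the heart of this method can be sketched as PCA over corresponding surface points from the training scans, with a new segmentation expressed as the mean shape plus weighted deformation modes. The sketch below assumes point correspondence is already established and omits the gradient-based cost function the paper uses to fit the mode weights to a CBCT.

```python
# Minimal PCA shape-model sketch: learn deformation modes from training shapes
# (flattened corresponding surface points) and reconstruct a new shape from a
# small number of mode weights. Data are synthetic placeholders.
import numpy as np

def fit_pca_shape_model(train_shapes, n_modes=3):
    """train_shapes: (n_samples, n_points*3) flattened corresponding points."""
    mean = train_shapes.mean(axis=0)
    centered = train_shapes - mean
    # SVD of the centered data gives the principal deformation modes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct(mean, modes, weights):
    """Mean shape plus weighted sum of deformation modes."""
    return mean + weights @ modes

rng = np.random.default_rng(1)
shapes = rng.normal(size=(6, 30))                 # 6 training shapes, 10 (x, y, z) points
mean, modes = fit_pca_shape_model(shapes, n_modes=2)
new_shape = reconstruct(mean, modes, np.array([0.5, -0.2]))
print(new_shape.shape)                            # (30,)
```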

  11. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-01

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation

  12. Lung Lesion Extraction Using a Toboggan Based Growing Automatic Segmentation Approach.

    PubMed

    Song, Jiangdian; Yang, Caiyun; Fan, Li; Wang, Kun; Yang, Feng; Liu, Shiyuan; Tian, Jie

    2016-01-01

    The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, it is challenging to achieve fully automatic lesion detection and segmentation with acceptable accuracy due to the heterogeneity of lung lesions. Here, we propose a novel toboggan based growing automatic segmentation approach (TBGA) with a three-step framework: automatic initial seed point selection, multi-constraint 3D lesion extraction and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it can provide a high lesion detection sensitivity (96.35%) and segmentation accuracy comparable with manual segmentation (P > 0.05), as demonstrated by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with commonly used level set and skeleton graph cut methods. The results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for one lesion segmentation was under 8 s using our new method. In conclusion, we believe that the novel TBGA can achieve robust, efficient and accurate lung lesion segmentation in CT images automatically.
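
    For illustration only, the sketch below shows generic 2D seeded region growing from a single seed point with an intensity tolerance; the toboggan-based seed selection, multi-constraint 3D extraction and refinement steps of TBGA are not reproduced.

```python
# Generic 2D seeded region growing sketch (4-connectivity, intensity tolerance).
# This illustrates only the "growing" idea, not the TBGA pipeline itself.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=50):
    grown = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not grown[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

img = np.array([[10, 12, 200], [11, 13, 210], [9, 250, 240]])
print(region_grow(img, (0, 0), tol=20).astype(int))   # grows over the low-intensity region
```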

  13. Automatic classification of epilepsy types using ontology-based and genetics-based machine learning.

    PubMed

    Kassahun, Yohannes; Perrone, Roberta; De Momi, Elena; Berghöfer, Elmar; Tassi, Laura; Canevini, Maria Paola; Spreafico, Roberto; Ferrigno, Giancarlo; Kirchner, Frank

    2014-06-01

    In the presurgical analysis for drug-resistant focal epilepsies, the definition of the epileptogenic zone, which is the cortical area where ictal discharges originate, is usually carried out by using clinical, electrophysiological and neuroimaging data analysis. Clinical evaluation is based on the visual detection of symptoms during epileptic seizures. This work aims at developing a fully automatic classifier of epilepsy types and their localization using ictal symptoms and machine learning methods. We present the results achieved by using two machine learning methods. The first is an ontology-based classification that can directly incorporate human knowledge, while the second is a genetics-based data mining algorithm that learns or extracts the domain knowledge from medical data in implicit form. The developed methods are tested on a clinical dataset of 129 patients. The performance of the methods is measured against the performance of seven clinicians, whose level of expertise is high/very high, in classifying two epilepsy types: temporal lobe epilepsy and extra-temporal lobe epilepsy. When comparing the performance of the algorithms with that of a single clinician, who is one of the seven clinicians, the algorithms show a slightly better performance than the clinician on three test sets generated randomly from 99 patients out of the 129 patients. The accuracy obtained for the two methods and the clinician is as follows: first test set 65.6% and 75% for the methods and 56.3% for the clinician, second test set 66.7% and 76.2% for the methods and 61.9% for the clinician, and third test set 77.8% for the methods and the clinician. When compared with the performance of the whole population of clinicians on the remaining 30 of the 129 patients, where the patients were selected by the clinicians themselves, the mean accuracy of the methods (60%) is slightly worse than the mean accuracy of the clinicians (61.6%). Results show that the methods perform at the

  14. Automatic extraction of three-dimensional thoracic aorta geometric model from phase contrast MRI for morphometric and hemodynamic characterization.

    PubMed

    Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna

    2016-02-01

    To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on level sets, was configured and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the comparison of models obtained from the three different datasets was also carried out in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method proved effective on PCMRI data for providing a 3D geometric model of the TA to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.

  15. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  16. Bond graph modeling, simulation, and reflex control of the Mars planetary automatic vehicle

    NASA Astrophysics Data System (ADS)

    Amara, Maher; Friconneau, Jean Pierre; Micaelli, Alain

    1993-01-01

    Bond graph modeling, simulation, and reflex control of the Planetary Automatic Vehicle are considered. A simulator derived from a complete bond graph model of the vehicle is presented. This model includes both knowledge and representation models of the mechanical structure, the floor contact, and the Mars site. MACSYMEN (a French acronym for aided design method of multi-energetic systems) is applied to study the input-output power transfers. The reflex control is then considered. Controller architecture and locomotion specificity are described. A numerical study highlights some interesting results concerning the robot's and controller's capabilities.

  17. Automatic reconstruction of physiological gestures used in a model of birdsong production

    PubMed Central

    Boari, Santiago; Perl, Yonatan Sanz; Margoliash, Daniel; Mindlin, Gabriel B.

    2015-01-01

    Highly coordinated learned behaviors are key to understanding neural processes integrating the body and the environment. Birdsong production is a widely studied example of such behavior in which numerous thoracic muscles control respiratory inspiration and expiration: the muscles of the syrinx control syringeal membrane tension, while upper vocal tract morphology controls resonances that modulate the vocal system output. All these muscles have to be coordinated in precise sequences to generate the elaborate vocalizations that characterize an individual's song. Previously we used a low-dimensional description of the biomechanics of birdsong production to investigate the associated neural codes, an approach that complements traditional spectrographic analysis. The prior study used algorithmic yet manual procedures to model singing behavior. In the present work, we present an automatic procedure to extract low-dimensional motor gestures that could predict vocal behavior. We recorded zebra finch songs and generated synthetic copies automatically, using a biomechanical model for the vocal apparatus and vocal tract. This dynamical model described song as a sequence of physiological parameters the birds control during singing. To validate this procedure, we recorded electrophysiological activity of the telencephalic nucleus HVC. HVC neurons were highly selective to the auditory presentation of the bird's own song (BOS) and gave similar selective responses to the automatically generated synthetic model of song (AUTO). Our results demonstrate meaningful dimensionality reduction in terms of physiological parameters that individual birds could actually control. Furthermore, this methodology can be extended to other vocal systems to study fine motor control. PMID:26378204

  18. An automatic enzyme immunoassay based on a chemiluminescent lateral flow immunosensor.

    PubMed

    Joung, Hyou-Arm; Oh, Young Kyoung; Kim, Min-Gon

    2014-03-15

    Microfluidic integrated enzyme immunosorbent assay (EIA) sensors are efficient systems for point-of-care testing (POCT). However, such systems are not only relatively expensive but also require a complicated manufacturing process. Therefore, additional fluidic control systems are required for the implementation of EIAs in a lateral flow immunosensor (LFI) strip sensor. In this study, we describe a novel LFI for EIA, the use of which does not require additional steps such as mechanical fluidic control, washing, or injecting. The key concept relies on a delayed-release effect of chemiluminescence substrates (luminol enhancer and hydrogen peroxide generator) by an asymmetric polysulfone membrane (ASPM). When the ASPM was placed between the nitrocellulose (NC) membrane and the substrate pad, substrates encapsulated in the substrate pad were released after 5.3 ± 0.3 min. Using this delayed-release effect, we designed and implemented the chemiluminescent LFI-based automatic EIA system, which sequentially performed the immunoreaction, pH change, substrate release, hydrogen peroxide generation, and chemiluminescent reaction with only 1 sample injection. In a model study, implementation of the sensor was validated by measuring the high sensitivity C-reactive protein (hs-CRP) level in human serum.

  19. An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts

    PubMed Central

    Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S.; Mangin, Jean-Francois; Seong, Joon-Kyung

    2015-01-01

    We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively. PMID:26225419

  20. Automatic extraction of facial interest points based on 2D and 3D data

    NASA Astrophysics Data System (ADS)

    Erdogmus, Nesli; Dugelay, Jean-Luc

    2011-03-01

    Facial feature points are one of the most important clues for many computer vision applications such as face normalization, registration and model-based human face coding. Hence, automating the extraction of these points would have a wide range of uses. In this paper, we aim to detect a subset of Facial Definition Parameters (FDPs) defined in MPEG-4 automatically by utilizing both 2D and 3D face data. The main assumption in this work is that the 2D images and the corresponding 3D scans are taken for frontal faces with neutral expressions. This limitation is realistic with respect to our scenario, in which the enrollment is done in a controlled environment and the detected FDP points are to be used for the warping and animation of the enrolled faces [1] where the choice of MPEG-4 FDP is justified. For the extraction of the points, 2D data, 3D data or both are used according to the distinctive information they carry in that particular facial region. As a result, a total of 29 interest points are detected. The method is tested on the neutral set of the Bosphorus database that includes 105 subjects with registered 3D scans and color images.

  1. A robust automatic birdsong phrase classification: A template-based approach.

    PubMed

    Kaewtip, Kantapon; Alwan, Abeer; O'Reilly, Colm; Taylor, Charles E

    2016-11-01

    Automatic phrase detection systems of bird sounds are useful in several applications as they reduce the need for manual annotations. However, bird phrase detection is challenging due to limited training data and background noise. Limited data occur because of limited recordings or the existence of rare phrases. Background noise interference occurs because of the intrinsic nature of the recording environment such as wind or other animals. This paper presents a different approach to birdsong phrase classification using template-based techniques suitable even for limited training data and noisy environments. The algorithm utilizes dynamic time-warping (DTW) and prominent (high-energy) time-frequency regions of training spectrograms to derive templates. The performance of the proposed algorithm is compared with the traditional DTW and hidden Markov models (HMMs) methods under several training and test conditions. DTW works well when the data are limited, while HMMs do better when more data are available, yet they both suffer when the background noise is severe. The proposed algorithm outperforms DTW and HMMs in most training and testing conditions, usually by a large margin when the background noise level is high. The innovation of this work is that the proposed algorithm is robust to both limited training data and background noise.
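
    The core template-matching idea can be sketched with a plain DTW distance between spectrogram feature sequences, assigning a query phrase to the nearest template. The feature vectors and templates below are synthetic placeholders; the prominent time-frequency region weighting proposed in the paper is not included.

```python
# Minimal dynamic time-warping (DTW) sketch over spectrogram frames; the paper's
# prominent-region template extraction and scoring are not reproduced here.
import numpy as np

def dtw_distance(a, b):
    """a, b: sequences of feature vectors, shape (n_frames, n_features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """templates: dict mapping phrase label -> template feature sequence."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))

rng = np.random.default_rng(2)
templates = {"phrase_a": rng.random((20, 8)), "phrase_b": rng.random((25, 8)) + 1.0}
query = rng.random((22, 8)) + 1.0
print(classify(query, templates))   # expected to be closer to "phrase_b"
```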

  2. Double-channel on-line automatic fruit grading system based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Junxiong; Xun, Yi; Li, Wei; Zhang, Cong

    2007-01-01

    The technology of fruit grading based on computer vision was studied and a double-channel on-line automatic grading system was built. The process of grading included fruit image acquisition, image processing, and fruit tracking and separating. In the first section, a new approach to image grabbing employing an asynchronous reset camera is presented. Three images of different surfaces of each fruit are collected by rolling the fruits as they pass through the image-capturing area. To acquire clear images, high-frequency fluorescent lamps supplied by three-phase alternating current were used for illumination. In the image processing section, the diameter and a color model were used to identify the grade of the fruits. Fruits were graded into four grades by size, and two by color. Each fruit identified was tracked and separated by a novel algorithm realized with a PLC (programmable logic controller). The whole grading system was tested with 1000 citrus fruits. It worked stably at a grading capacity of twelve fruits per second with nine grading levels. The on-line grading results indicated that the accuracy of tracking and separating was higher than 99%, and the ultimate grading error was less than 3%.

  3. An automatic detection method for the boiler pipe header based on real-time image acquisition

    NASA Astrophysics Data System (ADS)

    Long, Yi; Liu, YunLong; Qin, Yongliang; Yang, XiangWei; Li, DengKe; Shen, DingJie

    2017-06-01

    Generally, an endoscope is used to inspect the inner part of a thermal power plant boiler pipe header. However, because the endoscope hose is operated manually, the length and angle of the inserted probe cannot be controlled, and observation has a large blind spot limited by the length of the endoscope wire. To solve these problems, an automatic detection method for the boiler pipe header based on real-time image acquisition and simulation comparison techniques was proposed. A magnetic crawler with permanent-magnet wheels carries the real-time image acquisition device to perform the crawling work and collect real-time scene images. Using the position obtained from the positioning auxiliary device, the position of the real-time detection image in a virtual 3-D model is calibrated. By comparing the real-time detection images with the computer simulation images, defects or foreign matter that has fallen in can be accurately located, facilitating repair and cleaning.

  4. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  5. Implement the RFID position based system of automatic tablets packaging machine for patient safety.

    PubMed

    Chang, Ching-Hsiang; Lai, Yeong-Lin; Chen, Chih-Cheng

    2012-12-01

    Patient safety has been regarded as the most important quality policy of hospital management, and medicine dispensing plays an influential role in the Joint Commission International Accreditation Standards. The problem discussed in this paper is that the automatic tablets packaging machine (ATPM) in the hospital pharmacy department has no function for detecting mistakes when pharmacists replenish the cassettes. In this situation, there is a higher possibility of placing cassettes back in the wrong positions, and such human errors can have a serious impact on inpatients. Therefore, this study aims to design an RFID (radio frequency identification) position based system (PBS) for the ATPM using passive high frequency (HF) technology. First, HF tags are placed on each cassette and HF readers are installed on the cabinets for each position. Then, the system runs a reading loop to verify the ID number and position of each cassette. Next, the system detects whether the orbit is open and controls the readers' power consumption with respect to drug storage temperature. Finally, the RFID PBS of the ATPM is used to avoid medication errors at any time for patient safety.

  6. An automatic framework for quantitative validation of voxel based morphometry measures of anatomical brain asymmetry.

    PubMed

    Pepe, Antonietta; Dinov, Ivo; Tohka, Jussi

    2014-10-15

    The study of anatomical brain asymmetries has been a topic of great interest in the neuroimaging community in the past decades. However, the accuracy of brain asymmetry measurements has rarely been investigated. In this study, we propose a fully automatic methodology for the quantitative validation of brain tissue asymmetries as measured by Voxel Based Morphometry (VBM) from structural magnetic resonance (MR) images. Starting from a real MR image, the methodology generates simulated 3D MR images with a known and realistic pattern of inter-hemispheric asymmetry that models the left-occipital right-frontal petalia of a normal brain and the related rightward bending of the inter-hemispheric fissure. As an example, we generated a dataset of 64 simulated MR images and applied this dataset for the quantitative validation of optimized VBM measures of asymmetries in brain tissue composition. Our results suggested that VBM analysis strongly depended on the spatial normalization of the individual brain images, the selected template space, and the amount of spatial smoothing applied. The most accurate asymmetry detections were achieved by 9-degree-of-freedom registration to the symmetrical template space with 4 to 8 mm spatial smoothing.

  7. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important as these estimates are now used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a communal intensity map by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable given the increasing number of testimonies at BCSF; nevertheless, it can take into account incoherent answers. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave moderate scores (50 to 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because BCSF already has more than 47,000 forms in its database and because the questions and answers are well suited to statistical analysis. The ranking models could then be used as automatic methods constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
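
    One of the ranking methods mentioned, multinomial logistic regression, can be sketched as mapping encoded questionnaire answers to an EMS-98 intensity class. The encoded answers and labels below are synthetic placeholders, not the BCSF questionnaire encoding.

```python
# Hedged sketch of a multinomial-logistic-regression SQI estimator: encoded
# questionnaire answers -> intensity class. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.integers(0, 5, size=(500, 12))      # 500 forms, 12 encoded answers each
y = rng.integers(1, 7, size=500)            # intensity classes I..VI encoded as 1..6

model = LogisticRegression(max_iter=1000)   # multiclass targets are handled natively
model.fit(X, y)
print(model.predict(X[:5]))                 # predicted SQI classes for the first forms
```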

  8. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  9. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology

    PubMed Central

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  10. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The goodness and reliability of the signals recorded through the T-shirt were previously tested, while the employed algorithms for feature extraction and sleep classification were previously developed on standard ECG recordings and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and could obtain reliable sleep profiles, i.e., classification of sleep into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.

  11. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light.

    PubMed

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-28

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm(-2) and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications.

  12. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face tracking algorithms cannot specify the initial location and scale of the face automatically, this paper proposes a fast and robust method to detect and track faces by combining the AdaBoost and CamShift algorithms. First, the location and scale of the face are determined by the AdaBoost algorithm based on Haar-like features and conveyed to the initial search window automatically. Then, the CamShift algorithm is applied to track the face. Experiments based on OpenCV yield good results, even in special circumstances such as changing illumination and rapid face movement. In addition, by plotting the tracking trajectory of face movement, some abnormal behavior events can be analyzed.
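
    A minimal OpenCV sketch of the detect-then-track idea is given below: a Haar cascade (trained with AdaBoost) initializes the search window, and CamShift then tracks it with a hue-histogram back-projection. The video path is a placeholder, error handling is omitted, and this is not the authors' exact pipeline.

```python
# Hedged detect-then-track sketch: Haar cascade initialization + CamShift tracking.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("video.avi")                 # placeholder input path

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, 1.1, 5)      # assume at least one face found
x, y, w, h = faces[0]

# Hue histogram of the detected face region, used for back-projection.
roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, track_window = cv2.CamShift(back_proj, track_window, criteria)
    # rot_box holds the rotated tracking box for the current frame
```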

  13. Automatic Articular Cartilage Segmentation Based on Pattern Recognition from Knee MRI Images.

    PubMed

    Pang, Jianfei; Li, PengYue; Qiu, Mingguo; Chen, Wei; Qiao, Liang

    2015-12-01

    An automatic method for cartilage segmentation using knee MRI images is described. Three binary classifiers with integral and partial pixel features are built using Bayes' theorem to segment the femoral cartilage, tibial cartilage and patellar cartilage separately. First, an iterative procedure based on feedback from the number of strong edges is designed to obtain an appropriate threshold for the Canny operator and to extract the bone-cartilage interface from MRI images. Second, the different edges are identified based on certain features, which allows the different cartilage types to be distinguished simultaneously. The cartilage is segmented preliminarily with minimum error Bayesian classifiers that have been previously trained. Using the cartilage edge and its anatomic location, the speed of segmentation is improved. Finally, morphological operations are used to improve the primary segmentation results. The cartilage edge is smooth in the automatic segmentation results and shows good consistency with manual segmentation results. The mean Dice similarity coefficient is 0.761.
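
    The threshold-feedback idea for the Canny step can be sketched as follows, under the assumption that a useful target range for the number of edge pixels is known; the Bayesian cartilage classifiers and the morphological post-processing are not shown, and the image path is a placeholder.

```python
# Hedged sketch of the threshold-feedback idea: adjust the Canny thresholds until
# the number of edge pixels falls inside a target range.
import cv2
import numpy as np

def adaptive_canny(gray, target=(2000, 4000), low=30, ratio=2.0, max_iter=20):
    edges = cv2.Canny(gray, low, int(low * ratio))
    for _ in range(max_iter):
        n = int(np.count_nonzero(edges))
        if n > target[1]:
            low += 5                     # too many edges -> raise the threshold
        elif n < target[0]:
            low = max(1, low - 5)        # too few edges -> lower it
        else:
            break
        edges = cv2.Canny(gray, low, int(low * ratio))
    return edges

gray = cv2.imread("knee_slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
if gray is not None:
    print(np.count_nonzero(adaptive_canny(gray)))
```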

  14. Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.

    PubMed

    Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano

    2017-01-31

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches to this problem, accounting for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second one relies on wearable sensors and has the detection of eating behaviours as its main goal.

  15. Automatic resolution of ambiguous terms based on machine learning and conceptual relations in the UMLS.

    PubMed

    Liu, Hongfang; Johnson, Stephen B; Friedman, Carol

    2002-01-01

    Motivation. The UMLS has been used in natural language processing applications such as information retrieval and information extraction systems. The mapping of free-text to UMLS concepts is important for these applications. To improve the mapping, we need a method to disambiguate terms that possess multiple UMLS concepts. In the general English domain, machine-learning techniques have been applied to sense-tagged corpora, in which senses (or concepts) of ambiguous terms have been annotated (mostly manually). Sense disambiguation classifiers are then derived to determine senses (or concepts) of those ambiguous terms automatically. However, manual annotation of a corpus is an expensive task. We propose an automatic method that constructs sense-tagged corpora for ambiguous terms in the UMLS using MEDLINE abstracts. For a term W that represents multiple UMLS concepts, a collection of MEDLINE abstracts that contain W is extracted. For each abstract in the collection, occurrences of concepts that have relations with W as defined in the UMLS are automatically identified. A sense-tagged corpus, in which senses of W are annotated, is then derived based on those identified concepts. The method was evaluated on a set of 35 frequently occurring ambiguous biomedical abbreviations using a gold standard set that was automatically derived. The quality of the derived sense-tagged corpus was measured using precision and recall. The derived sense-tagged corpus had an overall precision of 92.9% and an overall recall of 47.4%. After removing rare senses and ignoring abbreviations with closely related senses, the overall precision was 96.8% and the overall recall was 50.6%. UMLS conceptual relations and MEDLINE abstracts can be used to automatically acquire knowledge needed for resolving ambiguity when mapping free-text to UMLS concepts.
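
    The disambiguation step (not the automatic corpus construction from UMLS relations) can be sketched as a naive Bayes classifier over co-occurring words or concepts, as in the toy example below; the abbreviation, senses and abstracts are illustrative placeholders.

```python
# Toy word-sense-disambiguation sketch: naive Bayes over co-occurring terms
# decides which sense an ambiguous abbreviation takes in a new abstract.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical sense-tagged "abstracts" for the ambiguous abbreviation "PCA"
texts = [
    "principal component analysis of gene expression data",
    "dimensionality reduction variance eigenvectors",
    "patient controlled analgesia after surgery morphine",
    "postoperative pain pump opioid dosing",
]
senses = ["principal component analysis", "principal component analysis",
          "patient controlled analgesia", "patient controlled analgesia"]

wsd = make_pipeline(CountVectorizer(), MultinomialNB())
wsd.fit(texts, senses)
print(wsd.predict(["analgesia pump dosing after surgery"]))
```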

  16. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light

    NASA Astrophysics Data System (ADS)

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-01

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm-2 and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00759g

  17. [A wavelet-transform-based method for the automatic detection of late-type stars].

    PubMed

    Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work explores possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transformed coefficients at the 5th scale consistently manifests a unimodal distribution, and the energy of the frequency spectrum is largely concentrated in a small neighborhood centered on the unique peak. However, for the spectra of other celestial bodies, the corresponding frequency spectrum is multimodal and the energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. The proposed method is shown by extensive experiments to be practical and robust.
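
    A hedged sketch of the described criterion follows: decompose the spectrum with a 5-level wavelet transform, take the Fourier spectrum of the coarsest-scale coefficients, and measure how much of the energy is concentrated around its single peak. The wavelet family, window width and decision threshold are illustrative assumptions, not the paper's settings, and PyWavelets is assumed to be available.

```python
# Hedged sketch: 5-level wavelet transform, FFT of the coarsest-scale detail
# coefficients, and a peak-energy-concentration score used as a detection cue.
import numpy as np
import pywt

def peak_energy_concentration(flux, wavelet="db4", level=5, window=5):
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    scale5 = coeffs[1]                          # detail coefficients at the coarsest scale
    power = np.abs(np.fft.rfft(scale5)) ** 2
    k = int(np.argmax(power))
    lo, hi = max(0, k - window), min(len(power), k + window + 1)
    return power[lo:hi].sum() / power.sum()     # fraction of energy near the peak

rng = np.random.default_rng(4)
flux = np.sin(np.linspace(0, 20, 2048)) + 0.1 * rng.normal(size=2048)  # toy "spectrum"
is_late_type_candidate = peak_energy_concentration(flux) > 0.8          # illustrative threshold
print(is_late_type_candidate)
```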

  18. Combination of automatic non-rigid and landmark based registration: the best of both worlds

    NASA Astrophysics Data System (ADS)

    Fischer, Bernd; Modersitzki, Jan

    2003-05-01

    Automatic, parameter-free, and non-rigid registration schemes are known to be valuable tools in various (medical) image processing applications. Typically, these approaches aim to match intensity patterns in each scan by minimizing an appropriate distance measure. The outcome of an automatic registration procedure in general matches the target image quite well on average. However, it may be inaccurate for specific, important locations such as anatomical landmarks. On the other hand, landmark based registration techniques are designed to accurately match user specified landmarks. A drawback of landmark based registration is that the intensities of the images are completely neglected. Consequently, the registration result away from the landmarks may be very poor. Here we propose a framework for novel registration techniques which are capable of combining automatic and landmark driven approaches in order to benefit from the advantages of both strategies. We also propose a general, mathematical treatment of this framework and a particular implementation. The procedure computes a displacement field which is guaranteed to produce a one-to-one match between given landmarks and at the same time minimizes an intensity based measure for the remaining parts of the images. The properties of the new scheme are demonstrated for a variety of numerical examples. It is worth noting that we do not merely present a new approach. Instead, we propose a general framework for a variety of different approaches. The choice of the main building blocks, the distance measure and the smoothness constraint, is essentially free.

  19. A Computational Agent Model of Automaticity for Driver’s Training

    NASA Astrophysics Data System (ADS)

    Mustapha, Rabi; Yusof, Yuhanis; Aziz, Azizi Ab

    2017-08-01

    Driver’s training is essential in order to assess and provide the driver with sufficient skills to handle vehicles in complex and dynamic environments. These skills are related to the cognitive factors that influence the driver's automaticity in making effective decisions. To illustrate these skills, simulation scenarios based on driver’s training have been conducted. The simulation results are consistent with existing concepts found in the literature.

  20. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    NASA Astrophysics Data System (ADS)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with automatic classification capacity, to analyse large numbers of landslide conditioning factors. This approach was developed to overcome the subjectivity of manually categorizing the scale data of landslide conditioning factors, and to predict a rainfall-induced landslide susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to find the best classification fit for each conditioning factor and then combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best fitting function that assess the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbor index (NNI), which was then used to identify the range of clustered landslide locations. Clustered locations were used as model training data with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provided a valuable scientific basis for spatial decision making in planning and urban management studies.
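
    A rough two-stage sketch in the same spirit is shown below, with a scikit-learn decision tree standing in for CHAID (which has no common Python implementation) to partition the conditioning factors, followed by logistic regression on terminal-node membership. The data are synthetic and the stand-in classifier is an assumption, not the CHAID algorithm used in the article.

```python
# Hedged two-stage sketch: tree-based partitioning of conditioning factors
# (stand-in for CHAID), then logistic regression on terminal-node membership.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(5)
X = rng.random((300, 14))                              # 14 conditioning factors per map cell
y = (X[:, 0] + X[:, 3] + 0.3 * rng.normal(size=300) > 1.1).astype(int)  # landslide / no landslide

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
leaves = tree.apply(X).reshape(-1, 1)                  # terminal-node index for every cell
leaf_features = OneHotEncoder().fit_transform(leaves)  # sparse one-hot of terminal nodes

lr = LogisticRegression(max_iter=1000).fit(leaf_features, y)
susceptibility = lr.predict_proba(leaf_features)[:, 1]  # landslide susceptibility index
print(susceptibility[:5].round(3))
```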