Science.gov

Sample records for automatic model based

  1. Octree based automatic meshing from CSG models

    NASA Technical Reports Server (NTRS)

    Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is emphasized. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractors. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
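
    As a rough illustration of the recursive spatial subdivision step, the sketch below builds an octree over a unit cube and classifies cells as inside, outside, or boundary with respect to a solid. The sphere membership test, sampling density, and depth limit are illustrative assumptions standing in for a real CSG point classifier, not the paper's implementation.

    # Minimal octree subdivision sketch: the boundary cells (those cut by the
    # solid's surface) are the ones that would later be decomposed into elements.
    from dataclasses import dataclass, field
    from itertools import product

    @dataclass
    class Cell:
        origin: tuple                 # min corner (x, y, z)
        size: float
        children: list = field(default_factory=list)
        label: str = "boundary"

    def inside_solid(p, center=(0.5, 0.5, 0.5), radius=0.4):
        """Placeholder CSG membership test: a single sphere."""
        return sum((pi - ci) ** 2 for pi, ci in zip(p, center)) <= radius ** 2

    def classify(cell, samples=3):
        """Classify a cell by testing a small grid of sample points inside it."""
        axes = [[cell.origin[i] + cell.size * k / (samples - 1) for k in range(samples)]
                for i in range(3)]
        flags = [inside_solid(p) for p in product(*axes)]
        if all(flags):
            return "inside"
        if not any(flags):
            return "outside"
        return "boundary"

    def subdivide(cell, max_depth, depth=0):
        """Recursively split boundary cells into eight octants."""
        cell.label = classify(cell)
        if cell.label != "boundary" or depth == max_depth:
            return
        half = cell.size / 2.0
        for dx, dy, dz in product((0, 1), repeat=3):
            child = Cell(origin=(cell.origin[0] + dx * half,
                                 cell.origin[1] + dy * half,
                                 cell.origin[2] + dz * half),
                         size=half)
            cell.children.append(child)
            subdivide(child, max_depth, depth + 1)

    def collect_boundary(cell, out):
        """Gather leaf cells that still intersect the solid's surface."""
        if cell.children:
            for c in cell.children:
                collect_boundary(c, out)
        elif cell.label == "boundary":
            out.append(cell)

    root = Cell(origin=(0.0, 0.0, 0.0), size=1.0)
    subdivide(root, max_depth=3)
    boundary_cells = []
    collect_boundary(root, boundary_cells)
    print(len(boundary_cells), "boundary cells to decompose into elements")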

  2. Model-Based Reasoning in Humans Becomes Automatic with Training

    PubMed Central

    Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.

    2015-01-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239

  3. Model-based automatic generation of grasping regions

    NASA Technical Reports Server (NTRS)

    Bloss, David A.

    1993-01-01

    The problem of automatically generating stable regions for a robotic end effector on a target object, given a model of the end effector and the object, is discussed. In order to generate grasping regions, an initial valid grasp transformation from the end effector to the object is obtained based on form closure requirements, and appropriate rotational and translational symmetries are associated with that transformation in order to construct a valid, continuous grasping region. The main result of this algorithm is a list of specific, valid grasp transformations of the end effector to the target object, and the appropriate combinations of translational and rotational symmetries associated with each specific transformation in order to produce a continuous grasp region.

  4. Automatic sensor placement for model-based robot vision.

    PubMed

    Chen, S Y; Li, Y F

    2004-02-01

    This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage viewpoints. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, and parameter amendment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method. PMID:15369081

  5. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.
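
    As a minimal sketch of the observation model (a weighted superposition of library sounds), the snippet below recovers non-negative weights with least squares; the data shapes and the non-negativity constraint are assumptions for illustration, whereas the paper encodes such assumptions as Bayesian priors on the weights.

    # Minimal sketch: estimate non-negative weights w such that the observed
    # spectrum y is approximated by a weighted sum of library sound spectra D.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_bins, n_sounds = 256, 40
    D = rng.random((n_bins, n_sounds))          # library of known sound spectra
    w_true = np.zeros(n_sounds)
    w_true[[3, 7, 21]] = [1.0, 0.5, 0.2]        # only a few sounds are active
    y = D @ w_true + 0.01 * rng.random(n_bins)  # observed mixture

    w_hat, residual = nnls(D, y)                # non-negative least squares
    print("active sounds:", np.flatnonzero(w_hat > 0.05))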

  6. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and the steadily growing capability in both quality and quantity is increasing that demand further. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.

  7. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    NASA Astrophysics Data System (ADS)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.

  8. Towards automatic calibration of hydrodynamic models - evaluation of gradient based optimisers

    NASA Astrophysics Data System (ADS)

    Fabio, Pamela; Apel, Heiko; Aronica, Giuseppe T.

    2010-05-01

    The calibration of two-dimensional hydraulic models is still underdeveloped in current scientific research. These models are computationally very demanding, and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. Moreover, the lack of relevant data against which the models can be calibrated always has to be accounted for. The present study considers a severe and well documented flood event that occurred in August 2002 on the river Mulde in the city of Eilenburg in Saxony, Germany. The application of the parallel version of the gradient-based optimiser PEST, which allows automatic and model-independent calibration, is presented here, and different calibration strategies, adopting different aggregation levels of the spatially distributed surface roughness parameters, are compared. Gradient-based methods are often criticized because they can be sensitive to the initial parameter values and might get trapped in a local minimum of the objective function. On the other hand, they are computationally very efficient and may be the only possibility to automatically calibrate CPU-time demanding models like 2D hydraulic models. In order to test the performance of the gradient-based optimiser, the optimisation results were compared with a sensitivity analysis testing the whole parameter space through Latin hypercube sampling, thus emulating a global optimiser. The results show that it is possible to use automatic calibration in combination with a 2D hydraulic model, and that equifinality of model parameterisation can also be caused by too many degrees of freedom in the calibration data in contrast to a too simple model setup. The sensitivity analysis also showed that the gradient-based optimiser was always able to find the global minimum. Based on these first results it can be concluded that a gradient-based optimiser appears to be a viable and appropriate choice for automatic calibration of 2D hydraulic models.
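
    The Latin hypercube sampling used to emulate a global optimiser can be sketched as follows; the roughness zones, Manning's n ranges, and sample count are illustrative assumptions, and SciPy's qmc module is used as a stand-in for the study's sampling tool.

    # Minimal sketch of Latin hypercube sampling over roughness parameters.
    # Each row of `samples` would parameterise one 2D hydraulic model run, whose
    # objective function value is then compared against the PEST optimum.
    import numpy as np
    from scipy.stats import qmc

    param_ranges = {                 # Manning's n ranges for hypothetical roughness zones
        "channel":    (0.020, 0.050),
        "floodplain": (0.030, 0.100),
        "urban":      (0.050, 0.150),
    }
    sampler = qmc.LatinHypercube(d=len(param_ranges), seed=42)
    unit_samples = sampler.random(n=500)                   # 500 samples in [0, 1)^3
    lows = np.array([lo for lo, _ in param_ranges.values()])
    highs = np.array([hi for _, hi in param_ranges.values()])
    samples = qmc.scale(unit_samples, lows, highs)         # scale to physical ranges
    print(samples[:3])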

  9. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the different segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  10. Automatic sleep staging based on ECG signals using hidden Markov models.

    PubMed

    Ying Chen; Xin Zhu; Wenxi Chen

    2015-08-01

    This study is designed to investigate the feasibility of automatic sleep staging using features derived only from the electrocardiography (ECG) signal. The study was carried out using the framework of hidden Markov models (HMMs). The mean and SD values of heart rate (HR) computed from each 30-second epoch served as the features. The two feature sequences were first detrended by ensemble empirical mode decomposition (EEMD), formed into a two-dimensional feature vector, and then converted into code vectors by the vector quantization (VQ) method. The output VQ indexes were utilized to estimate parameters for the HMMs. The proposed model was tested and evaluated on a group of healthy individuals using leave-one-out cross-validation. The automatic sleep staging results were compared with those estimated from polysomnography (PSG). Results showed accuracies of 82.2%, 76.0%, 76.1% and 85.5% for deep, light, REM and wake sleep, respectively. The findings prove that the HR-based HMM approach is feasible for automatic sleep staging and can pave the way for developing a more efficient, robust, and simple sleep staging system suitable for home application. PMID:26736316
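
    A rough sketch of the pipeline (per-epoch heart-rate features, vector quantisation, then decoding of a discrete-observation HMM) is given below; the codebook size, stage set, and transition/emission values are invented for illustration, whereas the study estimates the HMM parameters from training data after EEMD detrending.

    # Sketch: HR features -> vector quantisation -> Viterbi decoding of stages.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    features = rng.normal(size=(960, 2))        # (mean HR, SD HR) per 30-s epoch

    # vector quantisation: map each 2-D feature vector to a codebook index
    vq = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
    obs = vq.predict(features)                  # discrete observation sequence

    stages = ["wake", "light", "deep", "REM"]
    n_s, n_o = len(stages), 8
    A = np.full((n_s, n_s), 0.05) + 0.8 * np.eye(n_s)   # sticky transitions (assumed)
    A /= A.sum(axis=1, keepdims=True)
    B = rng.dirichlet(np.ones(n_o), size=n_s)           # emission probabilities (assumed)
    pi = np.array([0.7, 0.1, 0.1, 0.1])                 # initial stage distribution (assumed)

    def viterbi(obs, pi, A, B):
        """Most likely stage sequence for a discrete-observation HMM (log domain)."""
        T = len(obs)
        delta = np.zeros((T, len(pi)))
        psi = np.zeros((T, len(pi)), dtype=int)
        delta[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            scores = delta[t - 1][:, None] + np.log(A)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [delta[-1].argmax()]
        for t in range(T - 1, 0, -1):
            path.append(psi[t, path[-1]])
        return [stages[s] for s in reversed(path)]

    print(viterbi(obs, pi, A, B)[:10])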

  11. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    SciTech Connect

    Dr. Carl Stern; Dr. Martin Lee

    1999-06-28

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  12. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047

  13. Model-based vision system for automatic recognition of structures in dental radiographs

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  14. Chinese Automatic Question Answering System of Specific-domain Based on Vector Space Model

    NASA Astrophysics Data System (ADS)

    Hu, Haiqing; Ren, Fuji; Kuroiwa, Shingo

    In order to meet the demand to acquire necessary information efficiently from large volumes of electronic text, Question Answering (QA) technology, which automatically returns a concise reply to a question asked in the user's natural language, has attracted wide attention in recent years. Although research on QA systems in China started later than in western countries and Japan, it has attracted more and more attention recently. In this paper, we propose a Question-Answering architecture which combines answer retrieval for the most frequently asked questions, based on common knowledge, with document retrieval over sightseeing information. In order to improve reply accuracy, a synthetic model based on the statistical vector space model (VSM) and shallow semantic analysis is used, and the domain is limited to sightseeing information. A Chinese QA system about sightseeing based on the proposed method has been built. Evaluation experiments show that high accuracy can be achieved when a retrieval result is counted as correct if the right answer appears among the top three by resemblance degree. The experiments proved the efficiency of our method and showed that it is feasible to develop Question-Answering technology based on this approach.
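
    A minimal vector space model sketch of the retrieval step is shown below, ranking candidate answers by cosine similarity of TF-IDF vectors; the toy corpus and the use of scikit-learn are assumptions, and the paper additionally combines VSM scores with shallow semantic analysis.

    # Sketch: rank candidate answers by cosine similarity in TF-IDF space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    answers = [
        "The temple opens at nine in the morning and closes at five.",
        "Tickets for the scenic area cost forty yuan for adults.",
        "The museum is closed on Mondays and public holidays.",
    ]
    question = "What time does the temple open?"

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(answers)
    q_vector = vectorizer.transform([question])

    scores = cosine_similarity(q_vector, doc_vectors).ravel()
    top3 = scores.argsort()[::-1][:3]       # an answer counts as correct if in the top 3
    print([(answers[i], round(float(scores[i]), 3)) for i in top3])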

  15. Modelling Pasture-based Automatic Milking System Herds: Grazeable Forage Options.

    PubMed

    Islam, M R; Garcia, S C; Clark, C E F; Kerrisk, K L

    2015-05-01

    One of the challenges to increasing milk production in a large pasture-based herd with an automatic milking system (AMS) is to grow forages within a 1-km radius, as increases in walking distance increase the milking interval and reduce yield. The main objective of this study was to explore sustainable forage option technologies that can supply a high amount of grazeable forage for AMS herds using the Agricultural Production Systems Simulator (APSIM) model. Three different basic simulation scenarios (with irrigation) were carried out using forage crops (namely maize, soybean and sorghum) for the spring-summer period. Subsequent crops in the three scenarios were forage rape over-sown with ryegrass. Each individual simulation was run using actual climatic records for the period from 1900 to 2010. The simulated highest forage yields in maize, soybean and sorghum-based rotations (each followed by forage rape-ryegrass) were 28.2, 22.9, and 19.3 t dry matter/ha, respectively. The simulations suggested that the irrigation requirement could increase by up to 18%, 16%, and 17% respectively in those rotations in El Niño years compared to neutral years. On the other hand, the irrigation requirement could increase by up to 25%, 23%, and 32% in maize, soybean and sorghum based rotations in El Niño years compared to La Niña years. However, the irrigation requirement could decrease by up to 8%, 7%, and 13% in maize, soybean and sorghum based rotations in La Niña years compared to neutral years. The major implication of this study is that APSIM models have potential for devising preferred forage options to maximise grazeable forage yield, which may create the opportunity to grow more forage in small areas around the AMS; this in turn will minimise walking distance and milking interval and thus increase milk production. Our analyses also suggest that simulation analysis may provide decision support during climatic uncertainty. PMID:25924963

  16. Modelling Pasture-based Automatic Milking System Herds: Grazeable Forage Options

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    One of the challenges to increasing milk production in a large pasture-based herd with an automatic milking system (AMS) is to grow forages within a 1-km radius, as increases in walking distance increase the milking interval and reduce yield. The main objective of this study was to explore sustainable forage option technologies that can supply a high amount of grazeable forage for AMS herds using the Agricultural Production Systems Simulator (APSIM) model. Three different basic simulation scenarios (with irrigation) were carried out using forage crops (namely maize, soybean and sorghum) for the spring-summer period. Subsequent crops in the three scenarios were forage rape over-sown with ryegrass. Each individual simulation was run using actual climatic records for the period from 1900 to 2010. The simulated highest forage yields in maize, soybean and sorghum-based rotations (each followed by forage rape-ryegrass) were 28.2, 22.9, and 19.3 t dry matter/ha, respectively. The simulations suggested that the irrigation requirement could increase by up to 18%, 16%, and 17% respectively in those rotations in El Niño years compared to neutral years. On the other hand, the irrigation requirement could increase by up to 25%, 23%, and 32% in maize, soybean and sorghum based rotations in El Niño years compared to La Niña years. However, the irrigation requirement could decrease by up to 8%, 7%, and 13% in maize, soybean and sorghum based rotations in La Niña years compared to neutral years. The major implication of this study is that APSIM models have potential for devising preferred forage options to maximise grazeable forage yield, which may create the opportunity to grow more forage in small areas around the AMS; this in turn will minimise walking distance and milking interval and thus increase milk production. Our analyses also suggest that simulation analysis may provide decision support during climatic uncertainty. PMID:25924963

  17. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
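
    Purely as an illustration of turning a symbolic model into FORTRAN automatically (not PSAM itself), the sketch below uses SymPy's code generator on a generic rocket-thrust relation; the equation and names are chosen for the example.

    # Illustrative sketch: emit FORTRAN source from a symbolic model equation.
    import sympy as sp
    from sympy.utilities.codegen import codegen

    mdot, ve, pe, pa, Ae = sp.symbols("mdot ve pe pa Ae", real=True)
    thrust = mdot * ve + (pe - pa) * Ae          # generic thrust relation (assumed example)

    results = codegen(("thrust", thrust), language="F95", project="psam_demo")
    for filename, contents in results:           # generated FORTRAN 95 source and interface
        print(f"--- {filename} ---")
        print(contents)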

  18. Image segmentation for automatic particle identification in electron micrographs based on hidden Markov random field models and expectation maximization

    PubMed Central

    Singh, Vivek; Marinescu, Dan C.; Baker, Timothy S.

    2014-01-01

    Three-dimensional reconstruction of large macromolecules like viruses at resolutions below 10 Å requires a large set of projection images. Several automatic and semi-automatic particle detection algorithms have been developed over the years. Here we present a general technique designed to automatically identify the projection images of particles. The method is based on Markov random field modelling of the projected images and involves a pre-processing of electron micrographs followed by image segmentation and post-processing. The image is modelled as a coupling of two fields, a Markovian and a non-Markovian one. The Markovian field represents the segmented image. The micrograph is the non-Markovian field. The image segmentation step involves an estimation of coupling parameters and the maximum a posteriori estimate of the realization of the Markovian field, i.e., the segmented image. Unlike most current methods, no bootstrapping with an initial selection of particles is required. PMID:15065680

  19. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    NASA Astrophysics Data System (ADS)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Caro

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR and became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' by using aerial colour images of an urban area of the town of Engen in Germany.

  20. Automatic detection of echolocation clicks based on a Gabor model of their waveform.

    PubMed

    Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine

    2015-06-01

    Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on the inferences, a detection algorithm involving the post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real (taken from MobySound database) recordings. The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real-time. Performance comparisons are made to other publicly available detectors including pamguard. PMID:26093399
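
    The detection idea can be sketched as follows: apply the Teager-Kaiser Energy Operator to the waveform and take the ratio of a Gaussian-weighted moving average to a rectangular moving average of its output as the detection statistic. The sample rate, window lengths, threshold, and synthetic click below are assumptions, not the paper's settings.

    # Sketch: TKEO of the signal, then ratio of Gaussian to rectangular moving averages.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d, uniform_filter1d

    fs = 192_000                                   # sample rate (Hz), assumed
    t = np.arange(-0.5e-3, 0.5e-3, 1 / fs)
    click = np.exp(-(t / 50e-6) ** 2) * np.cos(2 * np.pi * 60_000 * t)   # Gabor-like click

    rng = np.random.default_rng(0)
    x = 0.02 * rng.standard_normal(20_000)
    x[10_000:10_000 + click.size] += click         # embed the click in noise

    def tkeo(sig):
        """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
        out = np.zeros_like(sig)
        out[1:-1] = sig[1:-1] ** 2 - sig[:-2] * sig[2:]
        return out

    energy = tkeo(x)
    ratio = gaussian_filter1d(energy, sigma=20) / (uniform_filter1d(energy, size=400) + 1e-12)
    detections = np.flatnonzero(ratio > 3.0)       # threshold chosen for illustration
    print("detected samples span:",
          (detections.min(), detections.max()) if detections.size else "none")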

  1. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  2. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  3. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    SciTech Connect

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  4. Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil

    2016-04-01

    In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03+/-0.6 mm). The AV plane was detected with an accuracy of -0.6+/-1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean +/-1.96 SD): 0.4+/-5.3 ml, 2.1+/-12.6 ml, and 1.5+/-7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.

  5. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler define the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS) system; (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  6. Automatic Sex Determination of Skulls Based on a Statistical Shape Model

    PubMed Central

    Luo, Li; Wang, Mengyang; Tian, Yun; Duan, Fuqing; Wu, Zhongke; Zhou, Mingquan; Rozenholc, Yves

    2013-01-01

    Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments are performed through subjective visual analysis by anthropologists or metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed, which projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods. It is easy to use without professional qualification or tedious manual measurement. With a group of Chinese skulls including 127 males and 81 females, we choose 92 males and 58 females to establish the discriminant model and validate the model with the other skulls. The correct rate is 95.7% and 91.4% for females and males, respectively. A leave-one-out test also shows that the method has a high accuracy. PMID:24312134
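
    A rough sketch of the approach is given below, using PCA as a stand-in for the statistical shape model to project the skulls into a low-dimensional shape space and Fisher discriminant analysis to classify sex; the synthetic landmark data, dimensionality, and cross-validation setup are assumptions.

    # Sketch: shape-space projection (PCA) followed by Fisher discriminant analysis.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_male, n_female, n_coords = 127, 81, 300      # flattened 3D landmark coordinates (assumed)
    X = np.vstack([rng.normal(0.0, 1.0, (n_male, n_coords)),
                   rng.normal(0.4, 1.0, (n_female, n_coords))])
    y = np.array([0] * n_male + [1] * n_female)    # 0 = male, 1 = female

    model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, y, cv=LeaveOneOut())   # leave-one-out test
    print("leave-one-out accuracy:", scores.mean())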

  7. Automatic annotation of histopathological images using a latent topic model based on non-negative matrix factorization

    PubMed Central

    Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.

    2011-01-01

    Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and, third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, which improved a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
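
    The latent-topic strategy can be sketched as follows: factorise a bag-of-features count matrix with non-negative matrix factorisation so that each image is represented as a mixture of visual topics, which a downstream model can map to annotations. Matrix sizes and the topic count below are illustrative assumptions.

    # Sketch: NMF over a bag-of-features matrix to obtain per-image topic mixtures.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_images, vocab_size, n_topics = 200, 500, 10
    counts = rng.poisson(1.0, size=(n_images, vocab_size)).astype(float)   # bag of features

    nmf = NMF(n_components=n_topics, init="nndsvda", max_iter=400, random_state=0)
    image_topics = nmf.fit_transform(counts)   # per-image topic weights (n_images x n_topics)
    topic_words = nmf.components_              # per-topic visual-word weights
    print(image_topics.shape, topic_words.shape)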

  8. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  9. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  10. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628

  11. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    NASA Astrophysics Data System (ADS)

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.

    2015-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we mapped the 16-bit DEM to 256 gray scales for data compression; then, for better visualization, the grayscale image is converted into an RGB image. After that, a median filter is applied twice to the DEM for data optimization, which produces smooth, continuous outlines for subsequent construction of the contour plane. Considering the fact that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of contour planes with the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise searching method. Then, a library of training samples is constructed from the targets extracted from sample DEM data, in which real crater targets are labeled manually as positive samples and non-crater objects are labeled as negative ones. Some morphological features are calculated for all these samples: the major axis (L), circumference (C), area inside the boundary (S), and radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C, and S/L as the key factors for identifying craters, and apply the Fisher discrimination method to the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data for identifying lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness. It is applicable to the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.

  12. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  13. Automatic model-based roentgen stereophotogrammetric analysis (RSA) of total knee prostheses.

    PubMed

    Syu, Ci-Bin; Lai, Jiing-Yih; Chang, Ren-Yi; Shih, Kao-Shang; Chen, Kuo-Jen; Lin, Shang-Chih

    2012-01-01

    Conventional radiography is insensitive for early and accurate estimation of the mal-alignment and wear of knee prostheses. The two-staged (rough and fine) registration of the model-based RSA technique has recently been developed to estimate the prosthetic pose (i.e., location and orientation) in vivo. In the literature, rough registration often uses template matching or manual adjustment of the roentgen images. Additionally, the possible error induced by the nonorthogonality of the two roentgen images is neither examined nor calibrated prior to fine registration. This study developed two RSA methods to automate the estimation of the prosthetic pose and decrease the nonorthogonality-induced error. The predicted results were validated by both simulation and experimental tests and compared with reported findings in the literature. The outcome revealed that the feature-recognized method automates pose estimation and significantly increases the execution efficiency, up to about 50 times in comparison with the literature counterparts. Although the nonorthogonal images resulted in undesirable errors, the outline-optimized method can effectively compensate for the induced errors prior to fine registration. The superiority in automation, efficiency, and accuracy demonstrates the clinical practicability of the two proposed methods, especially for the numerous fluoroscopic images of dynamic motion. PMID:22093794

  14. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of generated tetrahedral elements vary in both radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
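
    A toy version of such a size field is sketched below: the target element size grows from a small value at the airway wall to a larger value at the lumen centre as a function of wall distance, with the wall size shrinking with local flow rate. The functional forms and numbers are assumptions, not the authors' formula or Gmsh/TetGen input.

    # Sketch: element-size field blending from the wall size to the lumen-centre size.
    import numpy as np

    def wall_element_size(diameter, flow_rate, c=0.05):
        """Element size on the wall, shrinking with local flow rate (assumed form)."""
        return c * diameter / np.sqrt(1.0 + flow_rate)

    def element_size(wall_distance, radius, h_wall, h_center):
        """Blend linearly from h_wall at the wall to h_center at the lumen centre."""
        s = np.clip(wall_distance / radius, 0.0, 1.0)
        return h_wall + (h_center - h_wall) * s

    radius = 2.0e-3                                # airway radius (m), assumed
    h_wall = wall_element_size(diameter=2 * radius, flow_rate=0.5)
    h_center = 8 * h_wall
    for d in np.linspace(0.0, radius, 5):          # sample from wall to centreline
        print(f"wall distance {d:.1e} m -> element size "
              f"{element_size(d, radius, h_wall, h_center):.2e} m")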

  15. Automatic Detection of Student Mental Models Based on Natural Language Student Input during Metacognitive Skill Training

    ERIC Educational Resources Information Center

    Lintean, Mihai; Rus, Vasile; Azevedo, Roger

    2012-01-01

    This article describes the problem of detecting the student mental models, i.e. students' knowledge states, during the self-regulatory activity of prior knowledge activation in MetaTutor, an intelligent tutoring system that teaches students self-regulation skills while learning complex science topics. The article presents several approaches to…

  16. The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces

    NASA Astrophysics Data System (ADS)

    JianPing, Gu

    The integration of services is transparent, meaning that users no longer face millions of Web services directly, do not need to care where the required data are stored, and do not need to learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. The experiments prove the effectiveness and efficiency of our method.

  17. Modeling complexity in pathologist workload measurement: the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS).

    PubMed

    Cheung, Carol C; Torlakovic, Emina E; Chow, Hung; Snover, Dale C; Asa, Sylvia L

    2015-03-01

    Pathologists provide diagnoses relevant to the disease state of the patient and identify specific tissue characteristics relevant to response to therapy and prognosis. As personalized medicine evolves, there is a trend for increased demand for tissue-derived parameters. Pathologists perform increasingly complex analyses on the same 'cases'. Traditional methods of workload assessment and reimbursement, based on number of cases sometimes with a modifier (e.g., the relative value unit (RVU) system used in the United States), often grossly underestimate the amount of work needed for complex cases and may overvalue simple, small biopsy cases. We describe a new approach to pathologist workload measurement that aligns with this new practice paradigm. Our multisite institution with geographically diverse partner institutions has developed the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS) model that captures pathologists' clinical activities from parameters documented in departmental laboratory information systems (LISs). The model's algorithm includes: 'capture', 'export', 'identify', 'count', 'score', 'attribute', 'filter', and 'assess filtered results'. Captured data include specimen acquisition, handling, analysis, and reporting activities. Activities were counted and complexity units (CUs) generated using a complexity factor for each activity. CUs were compared between institutions, practice groups, and practice types and evaluated over a 5-year period (2008-2012). The annual load of a clinical service pathologist, irrespective of subspecialty, was ∼40,000 CUs using relative benchmarking. The model detected changing practice patterns and was appropriate for monitoring clinical workload for anatomical pathology, neuropathology, and hematopathology in academic and community settings, and encompassing subspecialty and generalist practices. AABACUS is objective, can be integrated with an LIS and automated, is reproducible, backwards compatible
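
    A toy sketch of the activity-based scoring idea follows: count each activity extracted from the LIS and multiply by a per-activity complexity factor, summing to complexity units. The activity names and factors are invented for illustration; AABACUS derives its counts and factors from real LIS records.

    # Sketch of the 'count' and 'score' steps on a single hypothetical case.
    from collections import Counter

    complexity_factor = {          # hypothetical per-activity weights
        "specimen_accession": 1.0,
        "block": 1.5,
        "slide_review": 2.0,
        "immunostain": 3.0,
        "report": 2.5,
    }

    lis_events = ["specimen_accession", "block", "block", "slide_review",
                  "slide_review", "slide_review", "immunostain", "report"]

    counts = Counter(lis_events)                                       # 'count' step
    cus = sum(complexity_factor[a] * n for a, n in counts.items())     # 'score' step
    print(f"complexity units for this case: {cus}")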

  18. A neurocomputational model of automatic sequence production.

    PubMed

    Helie, Sebastien; Roeder, Jessica L; Vucovich, Lauren; Rünger, Dennis; Ashby, F Gregory

    2015-07-01

    Most behaviors unfold in time and include a sequence of submovements or cognitive activities. In addition, most behaviors are automatic and repeated daily throughout life. Yet, relatively little is known about the neurobiology of automatic sequence production. Past research suggests a gradual transfer from the associative striatum to the sensorimotor striatum, but a number of more recent studies challenge this role of the basal ganglia (BG) in automatic sequence production. In this article, we propose a new neurocomputational model of automatic sequence production in which the main role of the BG is to train cortical-cortical connections within the premotor areas that are responsible for automatic sequence production. The new model is used to simulate four different data sets from human and nonhuman animals, including (1) behavioral data (e.g., RTs), (2) electrophysiology data (e.g., single-neuron recordings), (3) macrostructure data (e.g., TMS), and (4) neurological circuit data (e.g., inactivation studies). We conclude with a comparison of the new model with existing models of automatic sequence production and discuss a possible new role for the BG in automaticity and its implication for Parkinson's disease. PMID:25671503

  19. Impedance based automatic electrode positioning.

    PubMed

    Miklody, Daniel; Hohne, Johannes

    2015-08-01

    The position of electrodes in electrical imaging and stimulation of the human brain is an important variable with vast influence on the precision of modeling approaches. Nevertheless, the exact positions are obscured by many factors. 3D digitization devices can measure the electrode distribution over the scalp surface but remain uncomfortable to use and often imprecise. We demonstrate a new approach that uses solely the impedance information between the electrodes to determine their geometric positions. The algorithm involves multidimensional scaling to create a three-dimensional space based on these impedances. The success is demonstrated in a simulation study. An average electrode position error of 1.67 cm over all 6 subjects could be achieved. PMID:26736345
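
    The reconstruction idea can be sketched as follows: treat pairwise inter-electrode impedances as dissimilarities and embed them in three dimensions with multidimensional scaling. In the sketch, Euclidean distances between synthetic positions stand in for measured impedances; the mapping from impedance to distance is an assumption.

    # Sketch: 3D embedding of a pairwise dissimilarity matrix with MDS.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    true_positions = rng.normal(size=(32, 3))          # 32 hypothetical electrodes
    true_positions /= np.linalg.norm(true_positions, axis=1, keepdims=True)  # on a unit sphere

    dissimilarity = squareform(pdist(true_positions))  # stand-in for the impedance matrix
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    recovered = mds.fit_transform(dissimilarity)       # positions up to rotation/reflection
    print(recovered.shape)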

  20. Automatic enrollment for gait-based person re-identification

    NASA Astrophysics Data System (ADS)

    Ortells, Javier; Martín-Félez, Raúl; Mollineda, Ramón A.

    2015-02-01

    Automatic enrollment involves a critical decision-making process within the people re-identification context. However, this process has traditionally been undervalued. This paper studies the problem of automatic person enrollment from a realistic perspective relying on gait analysis. Experiments simulating random flows of people with considerable appearance variations between different observations of a person have been conducted, modeling both short- and long-term scenarios. Promising results based on ROC analysis show that automatically enrolling people by their gait is affordable with high success rates.

  1. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images

    PubMed Central

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2016-01-01

    Accurate segmentation and classification of the different anatomical structures of teeth from medical images plays an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time-consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method, improved by fully utilizing three-dimensional (3D) information, and to classify the tooth by employing an unsupervised learning Pulse Coupled Neural Networks (PCNN) model. In order to evaluate the proposed method, experiments were conducted on different datasets of mandibular molars, and the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods. PMID:27322421

  2. AUTOMATIC CALIBRATION OF A DISTRIBUTED CATCHMENT MODEL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Parameters of hydrologic models often are not exactly known and therefore have to be determined by calibration. A manual calibration depends on the subjective assessment of the modeler and can be very time-consuming. Methods of automatic calibration can improve these shortcomings. Yet, the...

  3. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced voigt model and local symmetry.

    PubMed

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases and exhibits midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model which takes into account the size and spatial information of the lesion is devised to predict the deformed midline. A composite local symmetry combining local intensity symmetry and local intensity gradient symmetry is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric dataset has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and an average maximum difference of 1.89 ± 1.18 mm from the ground truth. Experiments show that the proposed method yields better accuracy when the geometric center of the pathology is taken as the geometric center of the tumor and the pathological region is taken as the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, for delineation of the deformed midline, this is the first report on both quantification of gliomas and from MRI, which hopefully will provide valuable information for diagnosis

  4. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics

    PubMed Central

    Islam, M. R.; Clark, C. E. F.; Garcia, S. C.; Kerrisk, K. L.

    2015-01-01

    The aim of this modelling study was to investigate the effect of large herd size (and land area) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties, when 50% of the total diet was provided from home-grown feed either as pasture or as a grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed ‘moderate’; optimum pasture utilisation of 19.7 t DM/ha, termed ‘high’) and 2 rates of incorporation of a grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of the farm is CFR and 35% of the farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows increased the total walking distance between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h as herd size increased from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distance by up to 1 km, and thus reduced the MI by up to 0.5 h, compared to the moderate pasture, 800 cow herd combination. High pasture utilisation combined with 30% of the farm in CFR reduced the total walking distance by up to 1.7 km and the MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with the penalty increasing from 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary

  5. Frequency and damping ratio assessment of high-rise buildings using an Automatic Model-Based Approach applied to real-world ambient vibration recordings

    NASA Astrophysics Data System (ADS)

    Nasser, Fatima; Li, Zhongyang; Gueguen, Philippe; Martin, Nadine

    2016-06-01

    This paper deals with the application of the Automatic Model-Based Approach (AMBA) to actual buildings subjected to real-world ambient vibrations. In a previous paper, AMBA was developed with the aim of automating the estimation of the modal parameters and minimizing the estimation error, especially that of the damping ratio. It is applicable to a single-channel record, has no parameters to be set, and requires no manual initialization phase. The results presented in this paper should be regarded as further documentation of the approach over real-world ambient vibration signals.

  6. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for the application of such models in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithms of automatic parameter identification are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte-Carlo Sampling) is used for automatic identification of parameter values; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case study on automatic parameter identification, and satisfactory results were achieved. PMID:20329520
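
    The RSA/MCS idea can be sketched as follows (the hydraulic simulator, parameter ranges and observation values below are placeholders, not the paper's implementation): sample candidate pipe roughness coefficients, score each sample against SCADA pressure observations, and retain the best-fitting samples.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_pressures(roughness):
          """Stand-in for the hydraulic network solver (e.g. an EPANET-style run)."""
          return 50.0 - 0.15 * roughness                   # hypothetical response

      observed = np.array([35.0, 33.5, 36.2])              # SCADA pressure readings (assumed)
      n_samples, n_pipes = 10000, 3

      # MCS: draw candidate roughness coefficients from prior ranges.
      samples = rng.uniform(80.0, 140.0, size=(n_samples, n_pipes))
      errors = np.array([np.mean((simulate_pressures(s) - observed) ** 2) for s in samples])

      # Keep the best-fitting samples; the spread of each parameter among them also
      # indicates its sensitivity, in the spirit of RSA.
      behavioural = samples[np.argsort(errors)[:100]]
      print("identified roughness (mean of behavioural samples):", behavioural.mean(axis=0))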

  7. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly
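
    A minimal sketch of how the reported prediction errors can be computed, as mean ± 1 SD of predicted minus achieved values over the validation patients (the numbers below are illustrative only, not data from the study):

      import numpy as np

      # Hypothetical predicted and achieved rectum Dmean values (Gy) for a few
      # validation patients; the study uses 57 patients and several DVH metrics.
      predicted = np.array([24.1, 30.5, 27.8, 22.9, 26.3])
      achieved = np.array([24.4, 30.2, 28.1, 23.3, 26.5])

      errors = predicted - achieved
      print(f"prediction error: {errors.mean():+.1f} +/- {errors.std(ddof=1):.1f} Gy (mean +/- 1 SD)")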

  8. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  9. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  10. Roads Centre-Axis Extraction in Airborne SAR Images: AN Approach Based on Active Contour Model with the Use of Semi-Automatic Seeding

    NASA Astrophysics Data System (ADS)

    Lotte, R. G.; Sant'Anna, S. J. S.; Almeida, C. M.

    2013-05-01

    Research dealing with computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. The cartographic mapping based on these images is often manually accomplished, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, making this task a still-open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are hence used as input for an extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.

  11. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry are not available yet, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on some complicated manufactured parts.

  12. Three-dimensional electromagnetic model-based scattering center matching method for synthetic aperture radar automatic target recognition by combining spatial and attributed information

    NASA Astrophysics Data System (ADS)

    Ma, Conghui; Wen, Gongjian; Ding, Boyuan; Zhong, JinRong; Yang, Xiaoliang

    2016-01-01

    A three-dimensional electromagnetic model (3-D EM-model)-based scattering center matching method is developed for synthetic aperture radar automatic target recognition (ATR). The 3-D EM-model provides a concise and physically relevant description of the target's electromagnetic scattering phenomenon through its scattering centers, which makes it an ideal candidate for ATR. In our method, scatterers of the 3-D EM-model are projected to the two-dimensional measurement plane to predict the scatterers' location and scattering intensity properties. The same information is then extracted for scatterers in the measured data. A two-stage iterative operation is applied to match the model-predicted scatterers and the scatterers extracted from the measured data by combining spatial and attributed information. Based on the matching information of the two scatterer sets, a similarity measure between the model and the measured data is obtained and a recognition decision is made. Meanwhile, the target's configuration is inferred with the 3-D EM-model serving as a reference. Finally, data simulated by electromagnetic computation verified the method's validity.
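
    The matching step can be illustrated with a simplified sketch (the cost weights and one-shot assignment below are assumptions; the paper uses a two-stage iterative procedure): model-predicted scatterers are paired with measured ones using a cost that combines spatial distance with an attribute (amplitude) difference.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def match_scatterers(pred_xy, pred_amp, meas_xy, meas_amp, w_spatial=1.0, w_attr=0.3):
          """Pair predicted and measured scatterers; return index pairs and total cost."""
          d_spatial = np.linalg.norm(pred_xy[:, None, :] - meas_xy[None, :, :], axis=-1)
          d_attr = np.abs(pred_amp[:, None] - meas_amp[None, :])
          cost = w_spatial * d_spatial + w_attr * d_attr   # combined spatial + attribute cost
          rows, cols = linear_sum_assignment(cost)
          return list(zip(rows, cols)), float(cost[rows, cols].sum())

      pred_xy = np.array([[0.0, 0.0], [2.0, 1.0]])         # projected model scatterers
      pred_amp = np.array([1.0, 0.5])
      meas_xy = np.array([[2.1, 0.9], [0.1, -0.1]])        # scatterers extracted from data
      meas_amp = np.array([0.45, 1.1])
      print(match_scatterers(pred_xy, pred_amp, meas_xy, meas_amp))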

  13. FieldChopper, a new tool for automatic model generation and virtual screening based on molecular fields.

    PubMed

    Kalliokoski, Tuomo; Ronkko, Toni; Poso, Antti

    2008-06-01

    Algorithms were developed for ligand-based virtual screening of molecular databases. FieldChopper (FC) is based on the discretization of the electrostatic and van der Waals field into three classes. A model is built from a set of superimposed active molecules. The similarity of the compounds in the database to the model is then calculated using matrices that define scores for comparing field values of different categories. The method was validated using 12 publicly available data sets by comparing the method to the electrostatic similarity comparison program EON. The results suggest that FC is competitive with more complex descriptors and could be used as a molecular sieve in virtual screening experiments when multiple active ligands are known. PMID:18489083
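
    The underlying idea can be sketched briefly (the class boundaries and score matrix below are assumed values, not FieldChopper's): field values at grid points are discretised into three classes, and a database compound is scored against the model with a class-versus-class lookup matrix.

      import numpy as np

      def discretise(field, low=-0.1, high=0.1):
          """Map continuous field values at grid points to classes 0, 1 or 2."""
          return np.digitize(field, bins=[low, high])

      # Score matrix rewarding matching classes and penalising opposite ones (assumed values).
      SCORES = np.array([[ 1.0, 0.0, -1.0],
                         [ 0.0, 0.5,  0.0],
                         [-1.0, 0.0,  1.0]])

      def similarity(model_field, compound_field):
          return SCORES[discretise(model_field), discretise(compound_field)].sum()

      model = np.array([-0.3, 0.05, 0.4, 0.2])             # field values on the model grid
      candidate = np.array([-0.2, 0.0, 0.3, -0.15])        # same grid sampled for a compound
      print(similarity(model, candidate))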

  14. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and generating simulation models is described. The models will be verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user friendly environment for engineers to design, maintain, and verify their models and also automatically convert the mathematical models into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp Machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. The future goal, which is under construction, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.

  15. Nonlinear spectro-temporal features based on a cochlear model for automatic speech recognition in a noisy situation.

    PubMed

    Choi, Yong-Sun; Lee, Soo-Young

    2013-09-01

    A nonlinear speech feature extraction algorithm was developed by modeling human cochlear functions, and demonstrated as a noise-robust front-end for speech recognition systems. The algorithm was based on a model of the Organ of Corti in the human cochlea with such features as the basilar membrane (BM), outer hair cells (OHCs), and inner hair cells (IHCs). Frequency-dependent nonlinear compression and amplification of OHCs were modeled by lateral inhibition to enhance spectral contrasts. In particular, the compression coefficients were frequency dependent, based on psychoacoustic evidence. Spectral subtraction and temporal adaptation were applied in the time-frame domain. With long-term and short-term adaptation characteristics, these factors remove stationary or slowly varying components and amplify temporal changes such as onsets or offsets. The proposed features were evaluated with a noisy speech database and showed better performance than baseline methods such as mel-frequency cepstral coefficients (MFCCs) and RASTA-PLP in unknown noisy conditions. PMID:23558292

  16. Vision-based industrial automatic vehicle classifier

    NASA Astrophysics Data System (ADS)

    Khanipov, Timur; Koptelov, Ivan; Grigoryev, Anton; Kuznetsova, Elena; Nikolaev, Dmitry

    2015-02-01

    The paper describes an automatic video-stream-based motor vehicle classification system. The system determines vehicle type at payment collection plazas on toll roads. Classification is performed in accordance with a preconfigured set of rules which determine the type by the number of wheel axles, vehicle length, height over the first axle and full height. These characteristics are calculated using various computer vision algorithms: contour detectors, correlational analysis, fast Hough transform, Viola-Jones detectors, connected components analysis, elliptic shape detectors and others. Input data consist of video streams and induction loop signals. Outputs are vehicle entry and exit events, vehicle type, motion direction, speed and the above-mentioned features.

  17. Automatic identification of fault surfaces through Object Based Image Analysis of a Digital Elevation Model in the submarine area of the North Aegean Basin

    NASA Astrophysics Data System (ADS)

    Argyropoulou, Evangelia

    2015-04-01

    The current study focused on the seafloor morphology of the North Aegean Basin in Greece, through Object Based Image Analysis (OBIA) using a Digital Elevation Model. The goal was the automatic extraction of morphologic and morphotectonic features, resulting in fault surface extraction. An Object Based Image Analysis approach was developed based on the bathymetric data, and the features extracted on morphological criteria were compared with the corresponding landforms derived through tectonic analysis. A digital elevation model of 150 meters spatial resolution was used. First, slope, profile curvature, and percentile were extracted from this bathymetry grid. The OBIA approach was developed within the eCognition environment. Four segmentation levels were created, with "level 4" as the target. At level 4, the final classes of geomorphological features were classified: discontinuities, fault-like features and fault surfaces. On previous levels, additional landforms were also classified, such as the continental platform and continental slope. The results of the developed approach were evaluated by two methods. First, classification stability measures were computed within eCognition. Then, a qualitative and quantitative comparison of the results took place with a reference tectonic map which had been created manually based on the analysis of seismic profiles. The results of this comparison were satisfactory, which confirms the soundness of the developed OBIA approach.

  18. Using automatic programming for simulating reliability network models

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  19. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model representing samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness constrained factorization on a sample-by-sample basis. By contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary. Yet, they will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification. That is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific

  20. Application of automatic differentiation to reservoir design models.

    SciTech Connect

    Sinha, A. K.; Bischof, C. H.; Shiriaev, D.; Mathematics and Computer Science; Indian Inst. of Tech.; Technical Univ. of Dresden

    1998-05-01

    Automatic differentiation is a technique for computing derivatives accurately and efficiently with minimal human effort. The calculation of derivatives of numerical models is necessary for gradient-based optimization of reservoir systems to determine optimal sizes for reservoirs. The writers report on the use of automatic differentiation and divided difference approaches for computing derivatives for a single- and a multiple-reservoir yield model. In the experiments, the ADIFOR (Automatic Differentiation of Fortran) tool is employed. The results show that, for both the single- and the multiple-reservoir model, automatic differentiation computes derivatives exactly and more efficiently than the divided difference implementation. Postoptimization of the ADIFOR-generated derivative code by exploiting the model structure is also discussed. The writers observe that the availability of exact derivatives significantly benefits the convergence of the optimization algorithm: the solution time for the multireservoir problem, which was 10.5 hours with divided difference derivatives, decreased to less than two hours with ADIFOR 'out of the box' derivatives, and to less than an hour using the postoptimized ADIFOR derivative code.
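
    The difference between divided-difference and automatic differentiation can be illustrated in a few lines (this Python sketch with a hypothetical yield expression is only an analogue; ADIFOR itself transforms Fortran source): a forward-mode dual number carries the exact derivative alongside the function value.

      class Dual:
          """Minimal forward-mode automatic differentiation via dual numbers."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def _wrap(self, x):
              return x if isinstance(x, Dual) else Dual(x)
          def __add__(self, other):
              other = self._wrap(other)
              return Dual(self.val + other.val, self.dot + other.dot)
          __radd__ = __add__
          def __mul__(self, other):
              other = self._wrap(other)
              return Dual(self.val * other.val, self.val * other.dot + self.dot * other.val)
          __rmul__ = __mul__

      def yield_model(capacity):
          # Illustrative smooth yield expression in reservoir capacity (hypothetical).
          return 3.0 * capacity + 0.01 * capacity * capacity

      x, h = 120.0, 1e-3
      divided_difference = (yield_model(x + h) - yield_model(x - h)) / (2 * h)
      exact = yield_model(Dual(x, 1.0)).dot    # derivative carried exactly with the value
      print(divided_difference, exact)         # both close to 5.4; the dual result is exact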

  1. Optimization of high-reliability-based hydrological design problems by robust automatic sampling of critical model realizations

    NASA Astrophysics Data System (ADS)

    Bayer, Peter; de Paly, Michael; Bürger, Claudius M.

    2010-05-01

    This study demonstrates the high efficiency of the so-called stack-ordering technique for optimizing a groundwater management problem under uncertain conditions. The uncertainty is expressed by multiple equally probable model representations, such as realizations of hydraulic conductivity. During optimization of a well-layout problem for contaminant control, a ranking mechanism is applied that extracts those realizations that appear most critical for the optimization problem. It is shown that this procedure works well for evolutionary optimization algorithms, which are to some extent robust against noisy objective functions. More precisely, differential evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are applied. Stack ordering is comprehensively investigated for a plume management problem at a hypothetical template site, based on parameter values measured at, and a geostatistical model developed for, the Lauswiesen study site near Tübingen, Germany. The straightforward procedure yields computational savings above 90% in comparison to always evaluating the full set of realizations. This is confirmed by cross testing with four additional validation cases. The results show that both evolutionary algorithms obtain highly reliable near-optimal solutions. DE appears to be the better choice for cases with significant noise caused by small stack sizes. On the other hand, there seems to be a problem-specific threshold for the evaluation stack size above which the CMA-ES achieves solutions with both better fitness and higher reliability.

  2. Digital movie-based on automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the Hue (H) values for each frame. Pearson's correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. PMID:26592600
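
    The frame-processing chain can be sketched as follows (array layout and ROI position are assumptions; the study's ROI is 28×13 pixels): compute Hue values over the ROI of each frame, correlate them against the first frame, and take the end point where the second derivative of the resulting curve is largest.

      import colorsys
      import numpy as np

      def roi_hues(frame, r0=0, c0=0, height=28, width=13):
          """Hue value for every pixel of the region of interest of an RGB frame."""
          roi = frame[r0:r0 + height, c0:c0 + width, :].astype(float) / 255.0
          return np.array([colorsys.rgb_to_hsv(*px)[0] for px in roi.reshape(-1, 3)])

      def titration_curve(frames):
          """Pearson correlation of each frame's ROI Hue values against the first frame."""
          reference = roi_hues(frames[0])
          return np.array([np.corrcoef(reference, roi_hues(f))[0, 1] for f in frames])

      def end_point_index(r_values):
          """End point taken where the second derivative of the curve is largest."""
          return int(np.argmax(np.abs(np.diff(r_values, n=2)))) + 1

      # frames: sequence of RGB images decompiled from the digital movie at 26 FPS;
      # plotting r_values against the titrant-valve opening time gives the titration curve.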

  3. Automatic Parameters Identification of Groundwater Model using Expert System

    NASA Astrophysics Data System (ADS)

    Tsai, P. J.; Chen, Y.; Chang, L.

    2011-12-01

    Conventionally, parameter identification for groundwater models can be classified into manual identification and automatic identification using optimization methods. Parameter searching in manual identification requires heavy interaction with the modeler; therefore, the identified parameter values are interpretable by the modeler. However, the manual method is a complicated and time-consuming task and requires groundwater modeling practice and parameter identification experience. Optimization-based identification is more efficient and convenient than the manual approach. Nevertheless, the parameter search in the optimization approach cannot interact directly with the modeler, who can only examine the final results. Moreover, because of the simplifications in the optimization model, the parameter values obtained by optimization-based identification may not be feasible in reality. In light of the previous discussion, this study integrates a rule-based expert system and a groundwater simulation model, MODFLOW 2000, to develop an automatic groundwater parameter identification system. The hydraulic conductivity and specific yield are the parameters to be calibrated in the system. Since the parameter values are searched automatically according to rules specified by the modeler, the system is efficient and the identified parameter values are more interpretable than those from an optimization-based approach. Besides, since the rules are easy to modify and extend, the system is flexible and can accumulate expert experience. Several hypothetical cases were used to examine the system's validity and capability. The results show good agreement between the identified and given parameter values and also demonstrate great potential for extending the system into a fully functional and practical field application system.
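
    A minimal sketch of the rule-based loop (the response surface, rules and thresholds below are hypothetical, not the study's knowledge base): run the simulator, compare simulated and observed heads, and let simple rules steer the parameter values toward agreement.

      def run_modflow(hk, sy):
          """Stand-in for a MODFLOW 2000 run returning a simulated head (m)."""
          return 100.0 - 4.0 * hk + 2.0 * sy               # hypothetical response surface

      def calibrate(observed_head, hk=5.0, sy=0.10, tol=0.05, max_iter=200):
          step, last_sign = 0.10, 0
          for _ in range(max_iter):
              simulated = run_modflow(hk, sy)
              residual = simulated - observed_head
              if abs(residual) < tol:
                  break
              sign = 1 if residual > 0 else -1
              if last_sign and sign != last_sign:
                  step *= 0.5                              # rule: overshoot -> take smaller steps
              # rule: simulated head too high -> raise conductivity, too low -> lower it
              hk *= (1 + step) if sign > 0 else (1 - step)
              last_sign = sign
              # analogous rules would steer specific yield (sy) against transient data
          return hk, sy, simulated

      print(calibrate(observed_head=82.0))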

  4. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high resolution surface temperature records of the Greenland ice sheet is an integral condition. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin and processes behind the so-called Dansgaard-Oeschger events in glacial conditions [Johnsen et al. (1992); Dansgaard et al. (1982)], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled at the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant at orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, the use of firn models describing gas and temperature diffusion throughout the ice sheet is necessary. For this an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. Thereby, a theoretical δ15N record is generated for different temperature and accumulation rate scenarios and compared with measurement data in terms of mean square error (MSE), which finally leads to an optimization problem, namely finding a minimal MSE. The goal of the presented study is a Matlab based automatization of this inverse modelling procedure. The crucial point hereby is to find the temperature and accumulation rate input time series which minimizes the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit
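
    The Monte Carlo input generator can be sketched as follows (the forward function and all numbers are stand-ins for the firn densification and heat diffusion model; this greedy variant keeps improvements rather than all solutions under a limit):

      import numpy as np

      rng = np.random.default_rng(1)

      def forward_d15N(temperature, accumulation):
          """Stand-in for the firn densification and heat diffusion model."""
          return 0.01 * (temperature + 60.0) + 0.1 * accumulation   # hypothetical response

      accumulation = np.full(5, 0.20)                       # m ice eq./yr (assumed)
      true_temperature = np.array([-31.0, -30.5, -29.0, -32.0, -30.0])
      measured = forward_d15N(true_temperature, accumulation)        # synthetic "measured" record

      best_temp = np.full(5, -30.0)                         # initial temperature guess (deg C)
      best_mse = np.mean((forward_d15N(best_temp, accumulation) - measured) ** 2)
      for _ in range(20000):
          candidate = best_temp + rng.normal(0.0, 0.3, size=5)       # vary each point
          mse = np.mean((forward_d15N(candidate, accumulation) - measured) ** 2)
          if mse < best_mse:
              best_temp, best_mse = candidate, mse
      print(best_temp.round(2), best_mse)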

  6. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  7. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances.

    PubMed

    Islam, M R; Garcia, S C; Clark, C E F; Kerrisk, K L

    2015-06-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially 'concentrating' feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  8. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially ‘concentrating’ feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  9. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  10. Automatic Speech Recognition Based on Electromyographic Biosignals

    NASA Astrophysics Data System (ADS)

    Jou, Szu-Chen Stan; Schultz, Tanja

    This paper presents our studies of automatic speech recognition based on electromyographic biosignals captured from the articulatory muscles in the face using surface electrodes. We develop a phone-based speech recognizer and describe how the performance of this recognizer improves by carefully designing and tailoring the extraction of relevant speech features toward electromyographic signals. Our experimental design includes the collection of audibly spoken speech simultaneously recorded as acoustic data using a close-speaking microphone and as electromyographic signals using electrodes. Our experiments indicate that the electromyographic signals precede the acoustic signal by about 0.05-0.06 seconds. Furthermore, we introduce articulatory feature classifiers, which have recently been shown to improve classical speech recognition significantly. We show that the classification accuracy of articulatory features clearly benefits from the tailored feature extraction. Finally, these classifiers are integrated into the overall decoding framework applying a stream architecture. Our final system achieves a word error rate of 29.9% on a 100-word recognition task.

  11. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    PubMed

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to the driver's body movements. In this paper, a system is presented that estimates the heartbeat from seat-embedded piezoelectric sensors and that is robust to strong body movements. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on the sensors. The performance of the proposed system was evaluated on real driving data at up to 100 km/h and with slaloms at high speed. It is shown that this method improves pulsation estimation under strong body movement by 36.7% compared to static sensor pulsation estimation and appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions. PMID:26736864

  12. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.

    PubMed

    Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel

    2016-08-01

    Recent developments in computational modeling of cochlear implantation are promising for studying in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 patient models. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery. PMID:26715210

  13. An Automatic Learning-Based Framework for Robust Nucleus Segmentation.

    PubMed

    Xing, Fuyong; Xie, Yuanpu; Yang, Lin

    2016-02-01

    Computer-aided image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of diseases such as brain tumor, pancreatic neuroendocrine tumor (NET), and breast cancer. Automated nucleus segmentation is a prerequisite for various quantitative analyses including automatic morphological feature computation. However, it remains a challenging problem due to the complex nature of histopathology images. In this paper, we propose a learning-based framework for robust and automatic nucleus segmentation with shape preservation. Given a nucleus image, it begins with a deep convolutional neural network (CNN) model to generate a probability map, on which an iterative region merging approach is performed for shape initializations. Next, a novel segmentation algorithm is exploited to separate individual nuclei, combining a robust selection-based sparse shape model and a local repulsive deformable model. One of the significant benefits of the proposed framework is that it is applicable to different staining histopathology images. Due to the feature learning characteristic of the deep CNN and the high level shape prior modeling, the proposed method is general enough to perform well across multiple scenarios. We have tested the proposed algorithm on three large-scale pathology image datasets using a range of different tissue and stain preparations, and the comparative experiments with recent state-of-the-art methods demonstrate the superior performance of the proposed approach. PMID:26415167

  14. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  15. Thesaurus-Based Automatic Book Indexing.

    ERIC Educational Resources Information Center

    Dillon, Martin

    1982-01-01

    Describes technique for automatic book indexing requiring dictionary of terms with text strings that count as instances of term and text in form suitable for processing by text formatter. Results of experimental application to portion of book text are presented, including measures of precision and recall. Ten references are noted. (EJS)

  16. A comparison of texture models for automatic liver segmentation

    NASA Astrophysics Data System (ADS)

    Pham, Mailan; Susomboon, Ruchaneewan; Disney, Tim; Raicu, Daniela; Furst, Jacob

    2007-03-01

    Automatic liver segmentation from abdominal computed tomography (CT) images based on gray levels or shape alone is difficult because of the overlap in gray-level ranges and the variation in position and shape of the soft tissues. To address these issues, we propose an automatic liver segmentation method that utilizes low-level features based on texture information; this texture information is expected to be homogeneous and consistent across multiple slices for the same organ. Our proposed approach consists of the following steps: first, we perform pixel-level texture extraction; second, we generate liver probability images using a binary classification approach; third, we apply a split-and-merge algorithm to detect the seed set with the highest probability area; and fourth, we iteratively apply a region growing algorithm to the seed set to refine the liver's boundary and obtain the final segmentation results. Furthermore, we compare the segmentation results from three different texture extraction methods (Co-occurrence Matrices, Gabor filters, and Markov Random Fields (MRF)) to find the texture method that generates the best liver segmentation. From our experimental results, we found that the co-occurrence model led to the best segmentation, while the Gabor model led to the worst liver segmentation. Moreover, co-occurrence texture features alone produced approximately the same segmentation results as those produced when all the texture features from the combined co-occurrence, Gabor, and MRF models were used. Therefore, in addition to providing an automatic model for liver segmentation, we also conclude that Haralick co-occurrence texture features are the most significant texture characteristics for distinguishing liver tissue in CT scans.
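
    A minimal sketch of the co-occurrence (Haralick-style) texture features referred to above, for a single pixel neighbourhood (window size, quantisation levels, offset and the chosen feature set are assumptions):

      import numpy as np

      def quantise(window, levels=8):
          w = window.astype(float)
          span = w.max() - w.min()
          w = (w - w.min()) / span if span > 0 else np.zeros_like(w)
          return (w * (levels - 1)).astype(int)

      def glcm(window, levels=8, offset=(0, 1)):
          """Normalised grey-level co-occurrence matrix for one pixel offset."""
          q = quantise(window, levels)
          dr, dc = offset
          mat = np.zeros((levels, levels))
          for r in range(q.shape[0] - dr):
              for c in range(q.shape[1] - dc):
                  mat[q[r, c], q[r + dr, c + dc]] += 1
          return mat / max(mat.sum(), 1.0)

      def cooccurrence_features(window):
          p = glcm(window)
          i, j = np.indices(p.shape)
          contrast = np.sum(p * (i - j) ** 2)
          energy = np.sum(p ** 2)
          homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
          return np.array([contrast, energy, homogeneity])

      patch = np.random.default_rng(0).integers(0, 256, size=(15, 15))   # one CT neighbourhood
      print(cooccurrence_features(patch))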

  17. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  18. Geometrical and topological issues in octree based automatic meshing

    NASA Technical Reports Server (NTRS)

    Saxena, Mukul; Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.

  19. Automatic food intake detection based on swallowing sounds

    PubMed Central

    Makeyev, Oleksandr; Lopez-Meyer, Paulo; Schuckers, Stephanie; Besio, Walter; Sazonov, Edward

    2012-01-01

    This paper presents a novel fully automatic food intake detection methodology, an important step toward objective monitoring of ingestive behavior. The aim of such monitoring is to improve our understanding of eating behaviors associated with obesity and eating disorders. The proposed methodology consists of two stages. First, acoustic detection of swallowing instances based on mel-scale Fourier spectrum features and classification using support vector machines is performed. Principal component analysis and a smoothing algorithm are used to improve swallowing detection accuracy. Second, the frequency of swallowing is used as a predictor for detection of food intake episodes. The proposed methodology was tested on data collected from 12 subjects with various degrees of adiposity. Average accuracies of >80% and >75% were obtained for intra-subject and inter-subject models, respectively, with a temporal resolution of 30 s. Results obtained on 44.1 hours of data with a total of 7305 swallows show that detection accuracies are comparable for obese and lean subjects. They also suggest the feasibility of food intake detection based on swallowing sounds and the potential of the proposed methodology for automatic monitoring of ingestive behavior. Based on a wearable non-invasive acoustic sensor, the proposed methodology may potentially be used in free-living conditions. PMID:23125873
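
    The two-stage methodology can be sketched briefly (features, window lengths and the swallowing-frequency threshold below are placeholders, not the study's values): a support vector machine labels short acoustic frames as swallow or non-swallow, and 30 s epochs with sufficiently frequent swallowing are flagged as food intake.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      # Stage 1: frame-level swallow detection from (placeholder) spectral features.
      X_train = rng.normal(size=(200, 20))                 # e.g. mel-scale Fourier spectrum features
      y_train = rng.integers(0, 2, size=200)               # 1 = swallow, 0 = other sound
      classifier = SVC(kernel="rbf").fit(X_train, y_train)

      def intake_epochs(frame_features, frames_per_epoch=300, swallows_threshold=4):
          """Stage 2: flag 30 s epochs as food intake when swallowing is frequent enough."""
          swallow = classifier.predict(frame_features)
          n_epochs = len(swallow) // frames_per_epoch
          counts = swallow[:n_epochs * frames_per_epoch].reshape(n_epochs, frames_per_epoch).sum(axis=1)
          return counts >= swallows_threshold

      print(intake_epochs(rng.normal(size=(900, 20))))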

  20. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Astrophysics Data System (ADS)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  1. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  2. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's use of modeling and simulation in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  3. Automatic Building Information Model Query Generation

    SciTech Connect

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy-efficient building design and construction call for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges in data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to those challenges and can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generating query code based on a Model View Definition (MVD). The approach is demonstrated through a software prototype called QueryGenerator. Through a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts use BIM to drive building design with less labour and lower overhead cost.

  4. 11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND BUILT BY WES. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  5. Image analysis techniques associated with automatic data base generation.

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  6. Automatic reactor model synthesis with genetic programming.

    PubMed

    Dürrenmatt, David J; Gujer, Willi

    2012-01-01

    Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site. PMID:22277238

  7. A Robot Based Automatic Paint Inspection System

    NASA Astrophysics Data System (ADS)

    Atkinson, R. M.; Claridge, J. F.

    1988-06-01

    The final inspection of manufactured goods is a labour intensive activity. The use of human inspectors has a number of potential disadvantages; it can be expensive, the inspection standard applied is subjective and the inspection process can be slow compared with the production process. The use of automatic optical and electronic systems to perform the inspection task is now a growing practice but, in general, such systems have been applied to small components which are accurately presented. Recent advances in vision systems and robot control technology have made possible the installation of an automated paint inspection system at the Austin Rover Group's plant at Cowley, Oxford. The automatic inspection of painted car bodies is a particularly difficult problem, but one which has major benefits. The pass line of the car bodies is ill-determined, the surface to be inspected is of varying surface geometry and only a short time is available to inspect a large surface area. The benefits, however, are due to the consistent standard of inspection which should lead to lower levels of customer complaints and improved process feedback. The Austin Rover Group initiated the development of a system to fulfil this requirement. Three companies collaborated on the project; Austin Rover itself undertook the production line modifications required for body presentation, Sira Ltd developed the inspection cameras and signal processing system and Unimation (Europe) Ltd designed, supplied and programmed the robot system. Sira's development was supported by a grant from the Department of Trade and Industry.

  8. Image-based automatic recognition of larvae

    NASA Astrophysics Data System (ADS)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, quarantine pest recognition has focused mainly on imagoes (adult insects). However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. This paper presents larvae as new research objects, recognized by means of machine vision, image processing and pattern recognition. Applying color image segmentation to larvae images preserves more visual information and improves the recognition rate. The scale invariant feature transform (SIFT), which offers affine, perspective and brightness invariance, is adopted for feature extraction. A neural network algorithm is used for pattern recognition, and automatic identification of larvae images is achieved with satisfactory results.

  9. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  10. Kernel for modular robot applications: Automatic modeling techniques

    SciTech Connect

    Chen, I.M.; Yeo, S.H.; Chen, G.; Yang, G.

    1999-02-01

    A modular robotic system consists of standardized joint and link units that can be assembled into various kinematic configurations for different types of tasks. For the control and simulation of such a system, manual derivation of the kinematic and dynamic models, as well as the error model for kinematic calibration, requires tremendous effort, because the models constantly change as the robot geometry is altered after module reconfiguration. This paper presents a framework to facilitate the model-generation procedure for the control and simulation of the modular robot system. A graph technique, termed kinematic graphs and realized through assembly incidence matrices (AIM), is introduced to represent the module-assembly sequence and robot geometry. The kinematics and dynamics are formulated based on a local representation of the theory of Lie groups and Lie algebras. The automatic model-generation procedure starts with a given assembly graph of the modular robot. Kinematic, dynamic, and error models of the robot are then established, based on the local representations and iterative graph-traversing algorithms. This approach can be applied to a modular robot with both serial and branch-type geometries, and arbitrary degrees of freedom. Furthermore, the AIM of the robot naturally leads to solving the task-oriented optimal configuration problem in modular robots. There is no need to maintain a huge library of robot models, and the footprint of the overall software system can be reduced.

  11. Size-based protocol optimization using automatic tube current modulation and automatic kV selection in computed tomography.

    PubMed

    MacDougall, Robert D; Kleinman, Patricia L; Callahan, Michael J

    2016-01-01

    Size-based diagnostic reference ranges (DRRs) for contrast-enhanced pediatric abdominal computed tomography (CT) have been published in order to establish practical upper and lower limits of CTDI, DLP, and SSDE. Based on these DRRs, guidelines for establishing size-based SSDE target levels from the SSDE of a standard adult by applying a linear correction factor have been published and provide a great reference for dose optimization initiatives. The necessary step of designing manufacturer-specific CT protocols to achieve established SSDE targets is the responsibility of the Qualified Medical Physicist. The task is straightforward if fixed-mA protocols are used but more difficult when automatic exposure control (AEC) and automatic kV selection are considered. In such cases, the physicist must deduce the operation of AEC algorithms from technical documentation or through testing, using a wide range of phantom sizes. Our study presents the results of such testing using anthropomorphic phantoms ranging in size from the newborn to the obese adult. The effect of each user-controlled parameter was modeled for a single-manufacturer AEC algorithm (Siemens CARE Dose4D) and automatic kV selection algorithm (Siemens CARE kV). Based on the results presented in this study, a process for designing mA-modulated, pediatric abdominal CT protocols that achieve user-defined SSDE and kV targets is described. PMID:26894344

  12. Incremental logistic regression for customizing automatic diagnostic models.

    PubMed

    Tortajada, Salvador; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    In the last decades, and following the new trends in medicine, statistical learning techniques have been used to develop automatic diagnostic models that aid clinical experts through Clinical Decision Support Systems. The development of these models requires a large, representative amount of data, which is commonly obtained from one hospital or a group of hospitals after an expensive and time-consuming gathering, preprocessing, and validation of cases. After development, the model must pass an external validation that is often carried out in a different hospital or health center, and experience shows that such models frequently fall short of expectations. Furthermore, sending and storing patient data requires ethical approval and patient consent. For these reasons, we introduce an incremental learning algorithm based on the Bayesian inference approach that may allow us to build an initial model with a smaller number of cases and update it incrementally when new data are collected, or even recalibrate a model from a different center using a reduced number of cases. The performance of our algorithm is demonstrated on several benchmark datasets and a real brain tumor dataset; we compare its performance to a previous incremental algorithm and a non-incremental Bayesian model, showing that the algorithm is independent of the data model, iterative, and has good convergence. PMID:25417079

  13. Study on automatic testing network based on LXI

    NASA Astrophysics Data System (ADS)

    Hu, Qin; Xu, Xing

    2006-11-01

    LXI (LAN eXtensions for Instrumentation), an extension of the widely used Ethernet technology into the automatic testing field, is the next-generation instrumentation platform. The LXI standard is based on industry-standard Ethernet technology, using the standard PC interface as the primary communication bus between devices. It implements the IEEE 802.3 standard and supports the TCP/IP protocol. LXI combines the ease of use of GPIB-based instruments, the high performance and compact size of VXI/PXI instruments, and the flexibility and high throughput of Ethernet. The paper first introduces the LXI standard specification. Then, an automatic testing network architecture based on the LXI platform is proposed. The automatic testing network is composed of several LXI-based instruments connected via an Ethernet switch or router. The network is computer-centric, and all the LXI-based instruments in the network are configured and initialized from the computer. The computer controls the data acquisition and displays the data on the screen. The instruments use an Ethernet connection as the I/O interface and can be triggered over a wired trigger interface, over the LAN, or over the IEEE 1588 Precision Time Protocol running on the LAN interface. A hybrid automatic testing network comprising LXI-compliant devices and legacy instruments, including LAN instruments as well as GPIB, VXI and PXI products connected via internal or external adaptors, is also discussed at the end of the paper.
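
    Because LXI devices expose a plain TCP/IP interface, the computer-centric control described above can be exercised with nothing more than a raw socket. The sketch below is illustrative only: the IP address, port 5025 and the SCPI-style command strings are placeholder assumptions, not taken from the paper.

        import socket

        def query_instrument(host: str, command: str, port: int = 5025) -> str:
            """Send one SCPI-style command and return the instrument's reply."""
            with socket.create_connection((host, port), timeout=5.0) as sock:
                sock.sendall((command + "\n").encode("ascii"))
                return sock.recv(4096).decode("ascii").strip()

        if __name__ == "__main__":
            # Hypothetical LXI instrument: ask it to identify itself, then
            # trigger a single DC voltage measurement.
            print(query_instrument("192.168.1.50", "*IDN?"))
            print(query_instrument("192.168.1.50", "MEAS:VOLT:DC?"))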

  14. Fully automatic perceptual modeling of near regular textures

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Franceschetti, A.; Mecocci, A.

    2007-02-01

    Near regular textures feature a relatively high degree of regularity. They can be conveniently modeled by the combination of a suitable set of textons and a placement rule. The main issues in this respect are the selection of the minimum set of textons capturing the variability of the basic patterns; the identification and positioning of the generating lattice; and the modeling of the variability in both the texton structure and the deviation from periodicity of the lattice, which captures the naturalness of the considered texture. In this contribution, we provide a fully automatic solution to both the analysis and the synthesis issues, leading to the generation of texture samples that are perceptually indistinguishable from the original ones. The definition of an ad-hoc periodicity index makes it possible to predict the suitability of the model for a given texture. The model is validated through psychovisual experiments providing the conditions for subjective equivalence between the original and synthetic textures, while allowing the minimum number of textons needed to meet such a requirement to be determined for a given texture class. This is of prime importance in model-based coding applications, such as the one we foresee, as it allows the amount of information transmitted to the receiver to be minimized.

  15. An automatic image inpainting algorithm based on FCM.

    PubMed

    Liu, Jiansheng; Liu, Hui; Qiao, Shangping; Yue, Guangxue

    2014-01-01

    Many existing image inpainting algorithms require the repaired area to be determined manually by users. To address this drawback of traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm that identifies the repaired area with the fuzzy C-means (FCM) algorithm. FCM classifies the image pixels into a number of categories according to the similarity principle, so that similar pixels cluster into the same category as far as possible. Given the gray value of the pixels to be inpainted, the algorithm finds the category nearest to that value and treats it as the inpainting area; the inpainting area is then restored by the total variation (TV) model to achieve automatic image inpainting. PMID:24516358
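
    A minimal sketch of the clustering step is given below: a small fuzzy C-means routine written in NumPy groups pixels by gray value, and the cluster closest to an assumed gray value of the damaged region (255, i.e. white scratches) is taken as the area to inpaint. The TV restoration step itself is not shown, and the scratch value is an assumption made for illustration.

        import numpy as np

        def fcm_gray(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
            """Fuzzy C-means on 1-D gray values; returns centers and memberships."""
            rng = np.random.default_rng(seed)
            u = rng.random((n_clusters, values.size))
            u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
            for _ in range(n_iter):
                um = u ** m
                centers = um @ values / um.sum(axis=1)  # weighted cluster centers
                dist = np.abs(values[None, :] - centers[:, None]) + 1e-9
                u = 1.0 / dist ** (2.0 / (m - 1.0))
                u /= u.sum(axis=0)
            return centers, u

        rng = np.random.default_rng(1)
        img = rng.integers(0, 180, size=(64, 64)).astype(float)
        img[30:34, :] = 255.0                           # simulated white scratch
        centers, u = fcm_gray(img.ravel())
        damaged = int(np.argmin(np.abs(centers - 255.0)))
        mask = (u.argmax(axis=0) == damaged).reshape(img.shape)
        print("pixels flagged for inpainting:", int(mask.sum()))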

  16. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper proposes an automatic image classification and outlier identification method based on super-pixels and density-based selection of cluster centers. Pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a dramatic increase in the number of pixels greatly increases the computational complexity, the image is first preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. A normalized density-and-distance discrimination rule is designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that the method requires no human intervention, computes faster than the density clustering algorithm, and performs effective automated classification and outlier extraction.
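
    The density/distance decision rule can be sketched as follows. This is a stand-in implementation of density-peak style center selection on super-pixel features (centroid coordinates plus mean gray value); the synthetic features, the cutoff distance and the outlier thresholds are assumptions, and a real pipeline would first compute super-pixels (e.g. with SLIC).

        import numpy as np

        def density_peaks(features, cutoff):
            """Local density and distance-to-denser-point for each feature vector."""
            d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
            rho = (d < cutoff).sum(axis=1) - 1                   # local density
            delta = np.zeros(len(features))
            for i in range(len(features)):
                denser = np.where(rho > rho[i])[0]
                delta[i] = d[i, denser].min() if denser.size else d[i].max()
            score = (rho / rho.max()) * (delta / delta.max())    # normalised decision score
            return rho, delta, score

        rng = np.random.default_rng(0)
        # three synthetic "super-pixel" groups of features: (x, y, mean gray)
        feats = np.vstack([rng.normal(c, 0.5, size=(40, 3))
                           for c in ([0, 0, 0], [5, 5, 3], [0, 6, 6])])
        rho, delta, score = density_peaks(feats, cutoff=1.0)
        centers = np.argsort(score)[-3:]                         # the 3 strongest peaks
        outliers = np.where((rho <= 1) & (delta > 2.0))[0]       # low density and isolated
        print("centre indices:", centers.tolist(), "outliers:", outliers.size)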

  17. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  18. Application of automatic differentiation to groundwater transport models

    SciTech Connect

    Bischof, C.H.; Ross, A.A.; Whiffen, G.J.; Shoemaker, C.A.; Carle, A.

    1994-06-01

    Automatic differentiation (AD) is a technique for generating efficient and reliable derivative codes from computer programs with a minimum of human effort. Derivatives of model output with respect to input are obtained exactly. No intrinsic limits to program length or complexity exist for this procedure. Calculation of derivatives of complex numerical models is required in systems optimization, parameter identification, and systems identification. We report on our experiences with the ADIFOR (Automatic Differentiation of Fortran) tool on a two-dimensional groundwater flow and contaminant transport finite-element model, ISOQUAD, and a three-dimensional contaminant transport finite-element model, TLS3D. Derivative values and computational times for the automatic differentiation procedure are compared with values obtained from the divided differences and handwritten analytic approaches. We found that the derivative codes generated by ADIFOR provided accurate derivatives and ran significantly faster than divided-differences approximations, typically in a tenth of the CPU time required for the imprecise divided-differences method for both codes. We also comment on the impact of automatic differentiation technology with respect to accelerating the transfer of general techniques developed for using water resource computer models, such as optimal design, sensitivity analysis, and inverse modeling problems to field problems.
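
    ADIFOR itself performs source transformation of Fortran code, but the underlying idea of propagating exact derivatives alongside values can be illustrated with a tiny forward-mode dual-number class in Python; the toy model function below is purely illustrative and is not one of the cited codes.

        class Dual:
            """A value together with its derivative with respect to one input."""
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)

            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.value * other.deriv + self.deriv * other.value)

            __rmul__ = __mul__

        def toy_model(k):
            """Toy 'model output' h(k) = 3*k*k + 2*k; its derivative is exact,
            not a divided-difference approximation."""
            return 3 * k * k + 2 * k

        k = Dual(1.5, 1.0)            # seed derivative dk/dk = 1
        h = toy_model(k)
        print(h.value, h.deriv)       # 9.75 and dh/dk = 6*k + 2 = 11.0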

  19. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, during international negotiations on separating disputed areas, only manual adjustment is applied to match the delimitation line to the real terrain, which not only consumes much time and labor but also cannot ensure high precision. This paper therefore explores automatic matching between the two and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is acquired through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic matching model is built via Module Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from several aspects, including delimitation laws, benefits to both sides and so on. The conclusion is that the automatic matching method is feasible and effective.
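
    The core least-cost path step can be sketched with scikit-image's minimum-cost-path routine: given a raster of traversal costs derived from the delimitation rules, it traces the cheapest route between two endpoints. The cost raster and endpoints below are synthetic placeholders, not data from the paper.

        import numpy as np
        from skimage.graph import route_through_array

        cost = np.ones((100, 100))
        cost[40:60, 10:90] = 25.0          # e.g. a penalty for cutting across a ridge

        start, end = (0, 0), (99, 99)
        path, total_cost = route_through_array(cost, start, end,
                                               fully_connected=True, geometric=True)
        print(f"path of {len(path)} cells, accumulated cost {total_cost:.1f}")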

  20. A Hybrid Model for Automatic Emotion Recognition in Suicide Notes

    PubMed Central

    Yang, Hui; Willis, Alistair; de Roeck, Anne; Nuseibeh, Bashar

    2012-01-01

    We describe the Open University team’s submission to the 2011 i2b2/VA/Cincinnati Medical Natural Language Processing Challenge, Track 2 Shared Task for sentiment analysis in suicide notes. This Shared Task focused on the development of automatic systems that identify, at the sentence level, affective text of 15 specific emotions from suicide notes. We propose a hybrid model that incorporates a number of natural language processing techniques, including lexicon-based keyword spotting, CRF-based emotion cue identification, and machine learning-based emotion classification. The results generated by different techniques are integrated using different vote-based merging strategies. The automated system performed well against the manually-annotated gold standard, and achieved encouraging results with a micro-averaged F-measure score of 61.39% in textual emotion recognition, which was ranked 1st place out of 24 participant teams in this challenge. The results demonstrate that effective emotion recognition by an automated system is possible when a large annotated corpus is available. PMID:22879757
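
    One simple vote-based merging strategy (plain majority voting across the component systems) can be sketched as follows; the emotion labels and component outputs are made up, and the actual submission combined its keyword, CRF and machine-learning outputs with several more elaborate merging schemes.

        from collections import Counter

        def merge_votes(predictions):
            """predictions: list of per-sentence label lists, one list per component system."""
            merged = []
            for labels in zip(*predictions):
                votes = Counter(label for label in labels if label is not None)
                merged.append(votes.most_common(1)[0][0] if votes else "no_emotion")
            return merged

        keyword_sys = ["hopelessness", None, "love"]
        crf_sys     = ["hopelessness", "guilt", None]
        ml_sys      = ["sorrow", "guilt", "love"]
        print(merge_votes([keyword_sys, crf_sys, ml_sys]))
        # ['hopelessness', 'guilt', 'love']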

  1. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  2. Towards automatic calibration of 2-dimensional flood propagation models

    NASA Astrophysics Data System (ADS)

    Fabio, P.; Aronica, G. T.; Apel, H.

    2009-11-01

    Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays there are many models of different complexity available, regarding both the mathematical foundation and the spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored due to the disturbances they inflict and the lack of appropriate measuring equipment in place. Second, the two-dimensional models in particular are computationally very demanding, and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which makes automatic calibration possible, is used. The application of the parallel version of the optimiser to the model and calibration data showed that a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and b) equifinality of model parameterisation can also be caused by too many degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
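
    PEST wraps the hydraulic model as an external program, but the underlying idea, adjusting parameters to minimise the misfit between simulated and observed flood data, can be shown with a toy one-parameter model and SciPy. The stage-discharge relation, the roughness parameter and the "observations" below are synthetic assumptions, not the Mulde data.

        import numpy as np
        from scipy.optimize import least_squares

        discharge = np.array([50.0, 120.0, 300.0, 620.0])        # m^3/s
        observed_depth = np.array([0.9, 1.6, 2.9, 4.6])           # m, synthetic

        def toy_model(n, q):
            """Very crude depth ~ (n*q)^0.6 relation standing in for the 2-D model."""
            return (n * q) ** 0.6

        def residuals(params):
            return toy_model(params[0], discharge) - observed_depth

        fit = least_squares(residuals, x0=[0.05], bounds=(0.001, 0.2))
        print("calibrated roughness:", float(fit.x[0]), "cost:", float(fit.cost))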

  3. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and not cost-effective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at investigating optimal flight conditions and parameter settings for image acquisition. The collected images are processed with a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, and the comparison allows evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
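
    A minimal sketch of the height-extraction step: given a dense 3D point cloud and a detected stem position, the tree height is taken as the difference between the highest crown point near the stem and a robust local ground level. The point cloud, stem position and search radius below are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        ground = np.column_stack([rng.uniform(0, 10, 2000),
                                  rng.uniform(0, 10, 2000),
                                  rng.normal(0.0, 0.02, 2000)])      # flat terrain
        crown = np.column_stack([rng.normal(5, 0.4, 500),
                                 rng.normal(5, 0.4, 500),
                                 rng.uniform(0.3, 1.8, 500)])        # small pine, ~1.8 m
        points = np.vstack([ground, crown])

        stem_xy = np.array([5.0, 5.0])                               # detected stem position
        near = np.linalg.norm(points[:, :2] - stem_xy, axis=1) < 1.0
        local_ground = np.percentile(points[near, 2], 2)             # robust ground level
        tree_height = points[near, 2].max() - local_ground
        print(f"estimated tree height: {tree_height:.2f} m")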

  4. Automatic active model initialization via Poisson inverse gradient.

    PubMed

    Li, Bing; Acton, Scott T

    2008-08-01

    Active models have been widely used in image processing applications. A crucial stage that affects the ultimate active model performance is initialization. This paper proposes a novel automatic initialization approach for parametric active models in both 2-D and 3-D. The Poisson inverse gradient (PIG) initialization method exploits a novel technique that essentially estimates the external energy field from the external force field and determines the most likely initial segmentation. Examples and comparisons with two state-of-the-art automatic initialization methods are presented to illustrate the advantages of this innovation, including the ability to choose the number of active models deployed, rapid convergence, accommodation of broken edges, superior noise robustness, and segmentation accuracy. PMID:18632349

  5. Edge density based automatic detection of inflammation in colonoscopy videos.

    PubMed

    Ševo, I; Avramović, A; Balasingham, I; Elle, O J; Bergsland, J; Aabakken, L

    2016-05-01

    Colon cancer is one of the deadliest diseases where early detection can prolong life and can increase the survival rates. The early stage disease is typically associated with polyps and mucosa inflammation. The often used diagnostic tools rely on high quality videos obtained from colonoscopy or capsule endoscope. The state-of-the-art image processing techniques of video analysis for automatic detection of anomalies use statistical and neural network methods. In this paper, we investigated a simple alternative model-based approach using texture analysis. The method can easily be implemented in parallel processing mode for real-time applications. A characteristic texture of inflamed tissue is used to distinguish between inflammatory and healthy tissues, where an appropriate filter kernel was proposed and implemented to efficiently detect this specific texture. The basic method is further improved to eliminate the effect of blood vessels present in the lower part of the descending colon. Both approaches of the proposed method were described in detail and tested in two different computer experiments. Our results show that the inflammatory region can be detected in real-time with an accuracy of over 84%. Furthermore, the experimental study showed that it is possible to detect certain segments of video frames containing inflammations with the detection accuracy above 90%. PMID:27043856
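
    The general idea of texture-based detection can be sketched with a generic local edge-density map (this is not the authors' filter kernel): edges are detected, their local density is computed with a moving average, and dense regions are flagged. The synthetic frame and the 0.25 threshold are placeholder assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage import feature

        rng = np.random.default_rng(0)
        frame = rng.normal(0.5, 0.05, (240, 320))
        frame[80:160, 120:220] += rng.normal(0.0, 0.2, (80, 100))    # "rough" patch
        frame = np.clip(frame, 0.0, 1.0)

        edges = feature.canny(frame, sigma=1.5).astype(float)
        edge_density = uniform_filter(edges, size=31)                # local edge ratio
        inflamed_mask = edge_density > 0.25
        print("flagged pixels:", int(inflamed_mask.sum()))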

  6. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-01-01

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology. PMID:20375445

  7. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with only basic functions, limiting their use on a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical-image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous, anywhere-access nature of the system through the cloud platform enables a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling early intervention and more efficient disease management. PMID:26736579

  8. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  9. Failure prediction in automatically generated digital elevation models

    NASA Astrophysics Data System (ADS)

    Gooch, M. J.; Chandler, J. H.

    2001-10-01

    Developments in digital photogrammetry have provided the ability to generate digital elevation models (DEMs) automatically and are increasingly used by geoscientists. Using overlapping imagery, dense grids of digital elevations can be collected at high speeds (150 points per second) with a high level of accuracy. The trend towards using PC-based hardware, the widespread use of geographical information systems, and the forthcoming availability of high-resolution satellite imagery over the Internet at ever lower costs mean that the use of automated digital photogrammetry for elevation modelling is likely to become more widespread. Automation can reduce the need for an in-depth knowledge of the subject thus rendering the technology an option for more users. One criticism of the trend towards the automated "black box" approach is the lack of quality control procedures within the software, particularly with reference to identifying areas of the DEM with low accuracy. The traditional method of accuracy assessment is through the use of check point data (data collected by an independent method which has a higher level of accuracy against which the DEM can be compared). Check point data are, however, rarely available and it is typically recommended that the user manually check and edit the data using stereo viewing methods, a potentially lengthy process which can negate the obvious speed advantages brought about by automation. A data processing model has been developed that is capable of identifying areas where elevations are unreliable and to which the user should pay attention when editing and checking the data. The software model developed will be explained and described in detail in the paper. Results from tests on different scales of imagery, different types of imagery and other software packages will also be presented to demonstrate the efficacy and significantly the generality of the technique with other digital photogrammetric software systems.

  10. Neural network based algorithm for automatic identification of cough sounds.

    PubMed

    Swarnkar, V; Abeyratne, U R; Amrulloh, Yusuf; Hukins, Craig; Triasih, Rina; Setyati, Amalia

    2013-01-01

    Cough, which carries diagnostic information, is the most common symptom of several respiratory diseases. It is the most suitable candidate around which to develop a simplified screening technique for the timely management of respiratory diseases in both developing and developed countries, particularly in remote areas where medical facilities are limited. However, a major issue hindering this development is the lack of a reliable technique to automatically identify cough events. Medical practitioners still rely on manual counting, which is laborious and time-consuming. In this paper we propose a novel method, based on a neural network, to automatically identify cough segments while discarding other sounds such as speech and ambient noise. We achieved an accuracy of 98% in classifying 13,395 segments into two classes, 'cough' and 'other sounds', with a sensitivity of 93.44% and a specificity of 94.52%. Our preliminary results indicate that the method can develop into a real-time cough identification technique for continuous cough monitoring systems. PMID:24110049
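
    A minimal sketch of the classification stage, with synthetic feature vectors standing in for the acoustic features (e.g. MFCC-like descriptors) that a real system would compute per segment; the network size and data are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        cough = rng.normal(1.0, 1.0, size=(500, 20))     # synthetic "cough" features
        other = rng.normal(-1.0, 1.0, size=(500, 20))    # synthetic "other sound" features
        X = np.vstack([cough, other])
        y = np.array([1] * 500 + [0] * 500)              # 1 = cough, 0 = other sound

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("segment classification accuracy:", clf.score(X_te, y_te))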

  11. Automatic learning-based beam angle selection for thoracic IMRT

    SciTech Connect

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G. Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
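
    The learning step can be sketched as follows: a random forest maps per-angle anatomical features to a beam score, and the highest-scoring angles are kept. The interbeam-dependency optimisation described above is not reproduced, and the features, scores and the choice of seven beams are synthetic assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_cases, n_angles, n_feat = 120, 36, 12               # 10-degree angular grid
        X = rng.normal(size=(n_cases * n_angles, n_feat))      # anatomical features per angle
        y = rng.random(n_cases * n_angles)                     # "goodness" score of each angle

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        new_case = rng.normal(size=(n_angles, n_feat))         # one unseen patient
        scores = model.predict(new_case)
        beam_angles = np.argsort(scores)[-7:] * 10             # keep the 7 best angles
        print("suggested gantry angles (deg):", sorted(beam_angles.tolist()))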

  12. Automatic Feature-Based Grouping During Multiple Object Tracking

    PubMed Central

    Erlikhman, Gennady; Keane, Brian P.; Mettler, Everett; Horowitz, Todd S.; Kellman, Philip J.

    2013-01-01

    Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard, or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. The features were always irrelevant to the task instructions. We found that inter-target grouping improved performance for all feature types, except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were at times large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2) and relative to a “diversity” condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking. PMID:23458095

  13. A learning-based automatic spinal MRI segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Samarabandu, Jagath; Garvin, Greg; Chhem, Rethy; Li, Shuo

    2008-03-01

    Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve an automatic clinical segmentation that offers both speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) images is extremely challenging, as variations in soft tissue contrast and radio-frequency (RF) inhomogeneities cause image intensity variations. Moreover, little work has been done in this area. We propose a generic, slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of our contributions is that the proposed method is able to segment multiple images of different slices simultaneously. Our proposed method also has the potential to be imaging-modality independent as it is not specific to a particular imaging modality. The proposed method consists of two stages: candidate generation and verification. The candidate generation stage is aimed at obtaining the segmentation through energy minimization. In this stage, images are first partitioned into a number of image regions. Then, Support Vector Machines (SVMs) are applied to those pre-partitioned image regions to obtain the class conditional distributions, which are then fed into an energy function and optimized with the graph-cut algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones. Experimental results show that the proposed method is very efficient and robust with respect to image slices.

  14. [Automatic Measurement of Stellar Atmospheric Parameters Based on Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    We have collected massive amounts of stellar spectral data in recent years, which makes the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g and metallicity [Fe/H]) an important issue. Studying the automatic measurement of these three parameters is significant for scientific problems such as the evolution of the universe. However, research on this problem is not yet extensive, and some current methods cannot estimate the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can predict the stellar effective temperature Teff, surface gravity log g and metallicity [Fe/H]. The method has a small computational cost and fast training speed. Its main idea is that a set of mass distributions is first built, the original spectral data are then mapped into the mass space, and the stellar parameters are finally predicted with support vector regression (SVR) in the mass space. Stellar spectral data from SDSS-DR8 are chosen for training and testing. We also compared the predicted results of this method with the SSPP and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively. PMID:26978937
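
    A rough sketch of the regression stage is shown below, with a generic random projection standing in for the paper's mass-space mapping, followed by support vector regression of the effective temperature; all data and hyperparameters are synthetic assumptions.

        import numpy as np
        from sklearn.random_projection import GaussianRandomProjection
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(400, 3000))               # 400 spectra, 3000 flux pixels
        teff = 4500 + 2500 * rng.random(400)                  # effective temperature (K)

        model = make_pipeline(
            GaussianRandomProjection(n_components=50, random_state=0),  # stand-in mapping
            SVR(C=10.0, epsilon=50.0))                                  # regression of Teff
        model.fit(spectra[:300], teff[:300])
        pred = model.predict(spectra[300:])
        print("mean absolute error (K):", float(np.abs(pred - teff[300:]).mean()))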

  15. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  16. Wind modeling and lateral control for automatic landing

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Bryson, A. E., Jr.

    1975-01-01

    For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aerodynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.
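
    The shaping-filter idea can be sketched with a simplified first-order (Dryden-like) filter driven by white noise; this is not the von Karman model used in the report, and the airspeed, length scale and intensity values are placeholders.

        import numpy as np
        from scipy import signal

        V = 70.0          # airspeed, m/s (placeholder)
        L = 300.0         # turbulence length scale, m (placeholder)
        sigma = 1.5       # gust intensity, m/s (placeholder)
        dt = 0.02         # sample interval, s

        tau = L / V                                    # correlation time constant
        a = np.exp(-dt / tau)                          # discrete pole of the filter
        b = sigma * np.sqrt(1.0 - a**2)                # keeps output variance = sigma^2

        rng = np.random.default_rng(0)
        white = rng.standard_normal(5000)
        gust = signal.lfilter([b], [1.0, -a], white)   # u[k] = a*u[k-1] + b*w[k]
        print("sample std of generated gust series:", round(float(gust.std()), 3))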

  17. Automatic extraction of relationships between concepts based on ontology

    NASA Astrophysics Data System (ADS)

    Yuan, Yifan; Du, Junping; Yang, Yuehua; Zhou, Jun; He, Pengcheng; Cao, Shouxin

    This paper applies Chinese word segmentation technology to the automatic extraction and description of relationships between concepts. It takes text as the corpus, matches concept pairs by rules, and then describes the relationships between concepts using statistical methods. The paper implements an experiment based on text in the field of emergency response and optimizes part-of-speech tagging according to the experimental results, so that the extracted relations are more meaningful for emergency response. It analyzes the display order of inquiries and formulates response rules, making the results more meaningful. The method consequently proves effective and can be flexibly extended to other areas.

  18. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  19. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  20. Rule-based automatic segmentation for 3-D coronary arteriography

    NASA Astrophysics Data System (ADS)

    Sarwal, Alok; Truitt, Paul; Ozguner, Fusun; Zhang, Qian; Parker, Dennis L.

    1992-03-01

    Coronary arteriography is a technique used for evaluating the state of coronary arteries and assessing the need for bypass surgery and angioplasty. The present clinical application of this technology is based on the use of a contrast medium for manual radiographic visualization. This method is inaccurate due to varying interpretation of the visual results. Coronary arteriography based quantitations are impractical in a clinical setting without the use of automatic techniques applied to the 3-D reconstruction of the arterial tree. Such a system will provide an easily reproducible method for following the temporal changes in coronary morphology. The labeling of the arteries and establishing of the correspondence between multiple views is necessary for all subsequent processing required for 3-D reconstruction. This work represents a rule based expert system utilized for automatic labeling and segmentation of the arterial branches across multiple views. X-ray data of two and three views of human subjects and a pig arterial cast have been used for this research.

  1. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras, and its main goal is to monitor seismic activity, generate seismic hazard alerts, and provide data for Brazilian tectonic and structural research. Each institution operates and maintains its own seismic network, sharing data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are performed automatically by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was therefore designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (h-k stacking) and surface wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithospheric seismic velocity models using a joint inversion scheme. An analyst can review the results, change processing parameters, and select or reject the RF traces and SWD curves used in the lithosphere model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  2. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks to develop such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) accuracy in anatomy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce a priori shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art. This

  3. Automatic identification of model reductions for discrete stochastic simulation

    NASA Astrophysics Data System (ADS)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.

  4. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
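
    The final refinement stage described above (ICP) can be illustrated with a minimal rigid ICP loop: nearest-neighbour correspondences from a k-d tree plus the SVD-based (Kabsch) estimate of rotation and translation. This is a generic sketch, not the authors' full 3D-SIFT/FPFH/SAC-IA pipeline, and the iteration count and tolerance are arbitrary assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1.0
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        def icp(source, target, iterations=50, tol=1e-6):
            """Iteratively align 'source' (N x 3) to 'target' (M x 3); returns moved points and error."""
            tree = cKDTree(target)
            src, prev_err = source.copy(), np.inf
            for _ in range(iterations):
                dist, idx = tree.query(src)                   # nearest-neighbour correspondences
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
                if abs(prev_err - dist.mean()) < tol:
                    break
                prev_err = dist.mean()
            return src, dist.mean()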

  5. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    NASA Astrophysics Data System (ADS)

    Zhou, You

    2013-03-01

    The prediction of intonational phrase (IP) breaks is important for both the naturalness and the intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for the automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Results of comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms classification and regression trees (CART); that the log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, the templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction and significantly reduce the size of the ME model.

  6. A triangulation-based approach to automatically repair GIS polygons

    NASA Astrophysics Data System (ADS)

    Ledoux, Hugo; Arroyo Ohori, Ken; Meijers, Martijn

    2014-05-01

    Although the validation of a single GIS polygon can be considered as a solved issue, the repair of an invalid polygon has not received much attention and is still in practice a semi-manual and time-consuming task. We investigate in this paper algorithms to automatically repair a single polygon. Automated repair algorithms can be considered as interpreting ambiguous or ill-defined polygons and returning a coherent and clearly defined output (the definition of the international standards in our case). We present a novel approach, based on the use of a constrained triangulation, to automatically repair invalid polygons. Our approach is conceptually simple and easy to implement as it is mostly based on labelling triangles. It is also flexible: it permits us to implement different repair paradigms (we describe two in the paper). We have implemented our algorithms, and we report on experiments made with large real-world polygons that are often used by practitioners in different disciplines. We show that our approach is faster and more scalable than alternative tools.
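
    The problem setting can be reproduced quickly with the shapely library, which provides both a validity diagnosis and its own repair routine (assuming shapely >= 1.8). Note that shapely's make_valid is a different repair strategy from the constrained-triangulation approach proposed in the paper; it is shown here only to make the notion of an "invalid polygon" concrete.

        from shapely.geometry import Polygon
        from shapely.validation import explain_validity, make_valid

        # A self-intersecting "bowtie" ring, invalid under the ISO 19107 / OGC rules.
        bowtie = Polygon([(0, 0), (2, 2), (2, 0), (0, 2), (0, 0)])
        print(bowtie.is_valid)                 # False
        print(explain_validity(bowtie))        # reports the self-intersection location
        repaired = make_valid(bowtie)          # a valid (Multi)Polygon covering the same area
        print(repaired.is_valid, repaired.geom_type)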

  7. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  8. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  9. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  10. Automatic Sulcal Curve Extraction with MRF Based Shape Prior

    PubMed Central

    Yang, Zhen; Carass, Aaron; Prince, Jerry. L.

    2016-01-01

    Extracting and labeling sulcal curves on the human cerebral cortex is important for many neuroscience studies, however manually annotating the sulcal curves is a time-consuming task. In this paper, we present an automatic sulcal curve extraction method by registering a set of dense landmark points representing the sulcal curves to the subject cortical surface. A Markov random field is used to model the prior distribution of these landmark points, with short edges in the graph preserving the curve structure and long edges modeling the global context of the curves. Our approach is validated using a leave-one-out strategy of training and evaluation on fifteen cortical surfaces, and a quantitative error analysis on the extracted major sulcal curves. PMID:27303593

  11. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    PubMed

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks performing automatic cough detection, cough classification and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The outputs from these blocks are collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreak control. PMID:27583523
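
    A minimal sketch of the per-block classification idea, pairing summary audio features with a logistic regression model, is given below. The MFCC summary features, file names and labels are illustrative assumptions and not the exact features of the published algorithm.

        import numpy as np
        import librosa
        from sklearn.linear_model import LogisticRegression

        def segment_features(path):
            """Summarise an audio segment with the mean and standard deviation of 13 MFCCs."""
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # Hypothetical labelled training segments (1 = cough, 0 = other sound).
        train_paths = ["cough_01.wav", "speech_01.wav"]
        train_labels = np.array([1, 0])
        X = np.vstack([segment_features(p) for p in train_paths])

        clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
        prob_cough = clf.predict_proba(segment_features("unknown.wav").reshape(1, -1))[0, 1]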

  12. Automatic Tooth Segmentation of Dental Mesh Based on Harmonic Fields.

    PubMed

    Liao, Sheng-hui; Liu, Shi-jian; Zou, Bei-ji; Ding, Xi; Liang, Ye; Huang, Jun-hui

    2015-01-01

    An important preprocessing step in computer-aided orthodontics is to segment teeth from dental models accurately, with as few manual interactions as possible. Fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe malocclusion and crowding occur, which is common in clinical cases. Most published methods in this area are either inaccurate or require many manual interactions. Motivated by state-of-the-art general mesh segmentation methods that adopt the theory of harmonic fields to detect partition boundaries, this paper proposes a novel segmentation framework targeted at dental meshes. With a specially designed weighting scheme and a strategy that uses a priori knowledge to guide the assignment of harmonic constraints, this method can identify tooth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically, with robustness and efficiency. PMID:26413507
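
    The core of a harmonic-field computation can be sketched as a sparse Laplace solve with Dirichlet constraints on the mesh vertices; iso-contours of the resulting scalar field are candidate partition boundaries. The uniform graph Laplacian and the constraint sets below are simplifying assumptions standing in for the paper's specially designed weighting scheme.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def harmonic_field(n_vertices, edges, constraints):
            """Solve L x = 0 with fixed values at constrained vertices.

            edges       : iterable of (i, j) vertex index pairs of the mesh
            constraints : dict {vertex_index: value}, e.g. 0.0 on a tooth crown, 1.0 on the gum
            """
            L = sp.lil_matrix((n_vertices, n_vertices))
            for i, j in edges:                      # assemble a uniform graph Laplacian
                L[i, j] -= 1.0
                L[j, i] -= 1.0
                L[i, i] += 1.0
                L[j, j] += 1.0
            b = np.zeros(n_vertices)
            for v, value in constraints.items():    # impose Dirichlet constraints by row substitution
                L.rows[v] = [v]
                L.data[v] = [1.0]
                b[v] = value
            return spsolve(sp.csr_matrix(L), b)     # iso-levels of this field ~ partition boundaries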

  13. Automatic identification of activity-rest periods based on actigraphy.

    PubMed

    Crespo, Cristina; Aboy, Mateo; Fernández, José Ramón; Mojón, Artemio

    2012-04-01

    We describe a novel algorithm for identification of activity/rest periods based on actigraphy signals designed to be used for a proper estimation of ambulatory blood pressure monitoring parameters. Automatic and accurate determination of activity/rest periods is critical in cardiovascular risk assessment applications including the evaluation of dipper versus non-dipper status. The algorithm is based on adaptive rank-order filters, rank-order decision logic, and morphological processing. The algorithm was validated on a database of 104 subjects including actigraphy signals for both the dominant and non-dominant hands (i.e., 208 actigraphy recordings). The algorithm achieved a mean performance above 94.0%, with an average number of 0.02 invalid transitions per 48 h. PMID:22382991
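
    The rank-order filtering and morphological post-processing mentioned above can be illustrated on a 1-D activity count series as follows; the window lengths and activity threshold are arbitrary assumptions, not the validated parameters of the published algorithm.

        import numpy as np
        from scipy.ndimage import percentile_filter, binary_closing, binary_opening

        def activity_rest(counts, samples_per_hour=60, rank_percentile=50, threshold=20.0):
            """Label each actigraphy sample as activity (1) or rest (0)."""
            window = samples_per_hour                        # ~1 h rank-order (median) filter
            smooth = percentile_filter(counts, percentile=rank_percentile, size=window)
            state = smooth > threshold
            # Morphological clean-up: suppress transitions shorter than ~30 minutes.
            state = binary_closing(state, structure=np.ones(window // 2))
            state = binary_opening(state, structure=np.ones(window // 2))
            return state.astype(int)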

  14. Automatic recognition of landslides based on change detection

    NASA Astrophysics Data System (ADS)

    Li, Song; Hua, Houqiang

    2009-07-01

    After the Wenchuan earthquake disaster, landslide disasters became a common concern, and remote sensing has become more and more important for landslide monitoring. At present, interpretation and recognition of landslides from remote sensing imagery is mostly done by visual interpretation. Automatic recognition of landslides is a new, difficult but significant task. In order to find a more effective method to recognize landslides automatically, this project analyzes current methods for the recognition of landslide disasters and their applicability to the practice of landslide monitoring. A landslide is a phenomenon and disaster, triggered by natural or artificial causes, in which part of a slope composed of rock, soil and other fragmental materials slides along a weak structural surface under gravity. Consequently, according to the geoscientific principles of landslides, there is an obvious change in the sliding region between the pre-landslide and post-landslide states, and this change can be observed in remote sensing imagery; we therefore develop a new approach to identify landslides that uses change detection based on texture analysis of multi-temporal imagery. After preprocessing the remote sensing data (image enhancement and filtering, smoothing and clipping, image mosaicking, registration and merging, geometric correction and radiometric calibration), this paper performs change detection based on texture characteristics in multi-temporal images to recognize landslides automatically. After change detection based on texture analysis, if there is no change in the remote sensing image, the detected image is relatively homogeneous and shows clear clustering characteristics; if part of the image has changed, the detected image shows two or more clustering centers; and if the image has changed completely, the detected image appears disorderly and unsystematic. At last, this

  15. A semi-automatic model for sinkhole identification in a karst area of Zhijin County, China

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Oguchi, Takashi; Wu, Pan

    2015-12-01

    The objective of this study is to investigate the use of DEMs derived from ASTER and SRTM remote sensing images and from topographic maps to detect and quantify natural sinkholes in a karst area in Zhijin County, southwest China. Two methodologies were implemented. The first is a semi-automatic approach which identifies depressions stepwise using DEMs: 1) DEM acquisition; 2) sink filling; 3) sink depth calculation as the difference between the original and sink-free DEMs; and 4) elimination of spurious sinkholes using threshold values of morphometric parameters including TPI (topographic position index), geology and land use. The second is the traditional visual interpretation of depressions based on integrated analysis of high-resolution aerial photographs and topographic maps. The threshold values of depression area, shape, depth and TPI appropriate for distinguishing true depressions were obtained from the maximum overall accuracy generated by comparing the depression maps produced by the semi-automatic model and by visual interpretation. The results show that the best performance of the semi-automatic model for meso-scale karst depression delineation was achieved using the DEM from the topographic maps with the thresholds area >~ 60 m2, ellipticity >~ 0.2 and TPI <= 0. With these realistic thresholds, the accuracy of the semi-automatic model ranges from 0.78 to 0.95 for DEM resolutions from 3 to 75 m.
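
    The sink-depth step (the difference between the filled and the original DEM) can be reproduced with a greyscale morphological reconstruction, as in the generic sketch below; the synthetic DEM, depth cut-off and area measure are placeholders rather than the study's calibrated thresholds.

        import numpy as np
        from skimage.morphology import reconstruction
        from scipy.ndimage import label

        def depression_depth(dem):
            """Fill sinks by reconstruction-by-erosion and return the per-cell sink depth."""
            seed = dem.copy()
            seed[1:-1, 1:-1] = dem.max()          # flood the interior, keep the border as the seed
            filled = reconstruction(seed, dem, method='erosion')
            return filled - dem                   # > 0 inside closed depressions

        # Hypothetical usage on a synthetic DEM, labelling candidate sinkholes deeper than 0.1 m.
        dem = np.random.rand(200, 200) * 5.0
        depth = depression_depth(dem)
        regions, n = label(depth > 0.1)
        areas = np.bincount(regions.ravel())[1:]  # cell count of each candidate depression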

  16. Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN

    NASA Technical Reports Server (NTRS)

    Griffis, H.

    1985-01-01

    Large general purpose finite element programs require users to develop large quantities of input data. General purpose pre-processors are used to decrease the effort required to develop structural models. Further reduction of effort can be achieved by specific application pre-processors. Automatic Dynamic Aircraft Modeler (ADAM) is one such application specific pre-processor. General purpose pre-processors use points, lines and surfaces to describe geometric shapes. Specifying that ADAM is used only for aircraft structures allows generic structural sections, wing boxes and bodies, to be pre-defined. Hence with only gross dimensions, thicknesses, material properties and pre-defined boundary conditions a complete model of an aircraft can be created.

  17. Automatic camera calibration method based on dashed lines

    NASA Astrophysics Data System (ADS)

    Li, Xiuhua; Wang, Guoyou; Liu, Jianguo

    2013-10-01

    We present a new method for fully automatic calibration of traffic cameras using the end points of dashed lane lines. Our approach uses an improved RANSAC method, with the help of transverse pixel projection, to detect the dashed lines and the end points on them. Then, by analysing the geometric relationship between the camera and road coordinate systems, we construct a road model to fit the end points. Finally, using a two-dimensional calibration method, we can convert pixel distances in the image to meters along the ground-truth lane. In a large number of experiments covering a variety of conditions, our approach performs well, achieving less than 5% error in measuring test lengths in all cases.

  18. A CNN based Hybrid approach towards automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal V.; Katiyar, Sunil K.

    2013-06-01

    Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Networks (CNN), SIFT, coresets, and Cellular Automata. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimisation, adaptive resampling and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  19. Spike Detection Based on Normalized Correlation with Automatic Template Generation

    PubMed Central

    Hwang, Wen-Jyi; Wang, Szu-Huai; Hsu, Ya-Tzu

    2014-01-01

    A novel feedback-based spike detection algorithm for noisy spike trains is presented in this paper. It uses the information extracted from the results of spike classification to enhance spike detection. The algorithm performs template matching for spike detection with a normalized correlator. The detected spikes are then sorted by the OSort algorithm. The mean of the spikes of each cluster produced by the OSort algorithm is used as the template of the normalized correlator for subsequent detection. The automatic generation and updating of templates enhance the robustness of the spike detection to input trains with various spike waveforms and noise levels. Experimental results show that the proposed algorithm operating in conjunction with OSort is an efficient design for attaining high detection and classification accuracy for spike sorting. PMID:24960082
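
    The normalized correlation detector at the heart of the method can be sketched in a few lines of NumPy; the detection threshold is an illustrative assumption, and in the full algorithm the template would be replaced by the mean spike of each OSort cluster as detection proceeds.

        import numpy as np

        def normalized_correlation(signal, template):
            """Normalized correlation of a sliding signal window with a spike template (values in [-1, 1])."""
            m = len(template)
            t = template - template.mean()
            t /= np.linalg.norm(t) + 1e-12
            out = np.zeros(len(signal) - m + 1)
            for i in range(out.size):
                w = signal[i:i + m] - signal[i:i + m].mean()
                out[i] = np.dot(w, t) / (np.linalg.norm(w) + 1e-12)
            return out

        def detect_spikes(signal, template, threshold=0.8):
            """Indices where the normalized correlation exceeds the detection threshold."""
            return np.flatnonzero(normalized_correlation(signal, template) > threshold)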

  20. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    NASA Astrophysics Data System (ADS)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way to solve this problem is the integrated modernization of the power engineering facilities of mechanical engineering companies, aimed at increasing the efficiency of energy supply control and improving the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we have built a mathematical model of the data exchange process between the measuring transformers and a universal SCADA system. The modeling results show that the discussed equipment meets the requirements of the Standard and that the use of a universal SCADA system for these purposes is preferable and economically reasonable. In the modeling, the authors have used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  1. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via the XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate this approach for introducing annotations into automatically generated knowledge representations with a real-world example.

  2. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
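
    A toy sketch of the "three-step" idea is given below: screen for sensitive parameters by perturbing each one, then hand only the sensitive subset to a downhill simplex (Nelder-Mead) search. The quadratic objective stands in for the comprehensive model evaluation metrics, and the perturbation size and sensitivity tolerance are arbitrary assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def screen_sensitive(objective, x0, step=0.1, tol=1e-3):
            """Indices of parameters whose perturbation changes the objective noticeably."""
            base, sensitive = objective(x0), []
            for i in range(len(x0)):
                x = x0.copy()
                x[i] += step * (abs(x[i]) + 1.0)
                if abs(objective(x) - base) > tol:
                    sensitive.append(i)
            return sensitive

        def tune(objective, x0):
            idx = screen_sensitive(objective, x0)

            def reduced(p):                      # optimise only the sensitive subset
                x = x0.copy()
                x[idx] = p
                return objective(x)

            res = minimize(reduced, x0[idx], method='Nelder-Mead')
            x_opt = x0.copy()
            x_opt[idx] = res.x
            return x_opt, res.fun

        # Toy stand-in for the model evaluation metric (lower is better).
        metric = lambda x: (x[0] - 1.2) ** 2 + 0.5 * (x[2] + 0.3) ** 2
        best, score = tune(metric, np.array([1.0, 2.0, 0.0, 5.0]))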

  3. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  4. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.

  5. Automatic Test-Based Assessment of Programming: A Review

    ERIC Educational Resources Information Center

    Douce, Christopher; Livingstone, David; Orwell, James

    2005-01-01

    Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed simultaneously with programming assessment in the computer science curriculum. This article reviews a number of influential automatic assessment systems,…

  6. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as a human face or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.

  7. Automatic optic disc segmentation based on image brightness and contrast

    NASA Astrophysics Data System (ADS)

    Lu, Shijian; Liu, Jiang; Lim, Joo Hwee; Zhang, Zhuo; Tan, Ngan Meng; Wong, Wing Kee; Li, Huiqi; Wong, Tien Yin

    2010-03-01

    Untreated glaucoma leads to permanent damage of the optic nerve and resultant visual field loss, which can progress to blindness. As glaucoma often produces additional pathological cupping of the optic disc (OD), the cup-disc ratio is one measure that is widely used for glaucoma diagnosis. This paper presents an OD localization method that automatically segments the OD and so can be applied to cup-disc-ratio-based glaucoma diagnosis. The proposed OD segmentation method is based on the observations that the OD is normally much brighter and at the same time has smoother texture characteristics than other regions within retinal images. Given a retinal image, we first capture the OD's smooth texture characteristic by a contrast image that is constructed from the local maximum and minimum pixel lightness within a small neighborhood window. The centre of the OD can then be determined according to the density of candidate OD pixels, detected as the retinal image pixels with the lowest contrast. After that, an OD region is approximately determined by a pair of morphological operations, and the OD boundary is finally determined by an ellipse fitted to the convex hull of the detected OD region. Experiments over 71 retinal images of different qualities show that the OD region overlap reaches up to 90.37% between the OD boundary ellipses determined by our proposed method and those manually plotted by an ophthalmologist.
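
    The contrast image described above can be computed with local maximum and minimum filters; the sketch below marks pixels that are both very bright and locally smooth as OD candidates and takes their centroid as an approximate disc centre. The window size and percentile cut-offs are rough assumptions used only to illustrate the idea.

        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter

        def od_candidates(lightness, window=15, bright_pct=99, contrast_pct=5):
            """Candidate optic-disc pixels: very bright and with low local contrast."""
            local_max = maximum_filter(lightness, size=window)
            local_min = minimum_filter(lightness, size=window)
            contrast = local_max - local_min                  # small inside the smooth, bright disc
            bright = lightness >= np.percentile(lightness, bright_pct)
            smooth = contrast <= np.percentile(contrast[bright], contrast_pct)
            candidates = bright & smooth
            ys, xs = np.nonzero(candidates)
            centre = (ys.mean(), xs.mean()) if ys.size else None
            return candidates, centre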

  8. Automatic Construction of Anomaly Detectors from Graphical Models

    SciTech Connect

    Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.

  9. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design, i.e., difficult, complex, yet redundant effort. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  10. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images, is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of one MR image. It is based on a volumetric active appearance model. First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrary selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into a correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variations, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  11. The Carrying Capacity Under Four-Aspect Color Light Automatic Block Signaling Based on Cellular Automata

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Qian, Yong-Sheng; Guang, Xiao-Ping; Zeng, Jun-Wei; Jia, Zhi-Long; Wang, Xin

    2013-05-01

    With the application of dynamic control systems, the Cellular Automata model has become a valuable tool for the simulation of human behavior and traffic flow. As an integrated kind of railway signal-control pattern, four-aspect color light automatic block signaling accounts for 50% of the signal-control systems in China. Thus, it is extremely important to correctly calculate the carrying capacity under automatic block signaling. Based on this fact, the paper proposes a new kind of cellular automata model for four-aspect color light automatic block signaling under different speed states. It also presents rational rules for express trains with higher speed overtaking trains with lower speed in the same or an adjacent section, and departure rules at some intermediate stations. The state of mixed-speed trains running in a section composed of many stations is simulated with the CA model, and the train-running diagram is acquired accordingly. After analyzing the relevant simulation results, the required data are obtained for the variation of section carrying capacity, the average train delay and the train speed with changes in the mixing proportion, as well as the distance between adjacent stations.

  12. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    SciTech Connect

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-06-15

    Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the contrast of 2D RT images to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal; 2) high-pass filtering by subtracting the Gaussian-smoothed result; and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
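
    A simplified stand-in for the parameter selection is sketched below: apply CLAHE over a small grid of clip-limit and block-size values and keep the result with maximum entropy. The abstract's method instead uses an interior-point constrained optimizer and also includes background removal and high-pass filtering; the grid values here are assumptions.

        import numpy as np
        from skimage import exposure, measure

        def auto_clahe(image):
            """Pick CLAHE parameters that maximize the entropy of the enhanced image."""
            image = image.astype(float)
            image = (image - image.min()) / (np.ptp(image) + 1e-12)   # equalize_adapthist expects [0, 1]
            best, best_entropy = image, -np.inf
            for clip in (0.005, 0.01, 0.02, 0.04):
                for blocks in (8, 16, 32):
                    enhanced = exposure.equalize_adapthist(
                        image, kernel_size=max(image.shape[0] // blocks, 8), clip_limit=clip)
                    h = measure.shannon_entropy(enhanced)
                    if h > best_entropy:
                        best, best_entropy = enhanced, h
            return best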

  13. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.

    PubMed

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
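
    Assuming that "AP-Cluster" refers to affinity-propagation clustering (an assumption on our part), the representative-fingerprint step could look like the sketch below; the synthetic RSSI matrix is a placeholder for real crowdsourced observations.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        # Hypothetical crowdsourced fingerprints: one row per observation, one column per Wi-Fi AP (dBm).
        rng = np.random.default_rng(0)
        fingerprints = rng.normal(loc=-70.0, scale=8.0, size=(500, 12))

        ap = AffinityPropagation(random_state=0).fit(fingerprints)
        representatives = ap.cluster_centers_     # one representative fingerprint per cluster
        labels = ap.labels_                       # which representative each observation maps to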

  14. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading of prostatic carcinoma in pathological images is important for prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, which is the most widespread method for histological grading of prostate tissues. We propose two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by the Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be improved to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods, because it has a much smaller size and still retains the most powerful discriminating capability in grading prostate images. PMID:19164082
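
    Fractal-dimension features of the kind described can be estimated with a basic box-counting routine on a binarized region of interest; this generic estimator is only a sketch and not the paper's exact intensity/texture formulation.

        import numpy as np

        def box_counting_dimension(binary_image):
            """Estimate the box-counting (fractal) dimension of a 2-D binary mask."""
            img = np.asarray(binary_image, dtype=bool)
            scales = [2 ** k for k in range(1, int(np.log2(min(img.shape))))]   # box edge lengths
            counts = []
            for s in scales:
                h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(max(blocks.any(axis=(1, 3)).sum(), 1))            # occupied boxes
            slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
            return slope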

  15. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623

  16. Automatic tumor segmentation using knowledge-based techniques.

    PubMed

    Clark, M C; Hall, L O; Goldgof, D B; Velthuizen, R; Murtagh, F R; Silbiger, M S

    1998-04-01

    A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRI's) of the human brain is presented. The MRI's consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class are provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time. PMID:9688151

  17. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903
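
    Fitting a closed (periodic) cubic spline to candidate boundary points is directly available in SciPy, as sketched below; the synthetic sample points and smoothing factor are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import splprep, splev

        # Hypothetical noisy samples around a nucleus boundary (closed contour).
        theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
        x = 100 + 30 * np.cos(theta) + np.random.normal(0, 1.5, theta.size)
        y = 120 + 22 * np.sin(theta) + np.random.normal(0, 1.5, theta.size)

        # per=True closes the curve; s controls the smoothing of the cubic (k=3) spline.
        tck, u = splprep([x, y], s=40.0, per=True, k=3)
        xs, ys = splev(np.linspace(0, 1, 400), tck)   # dense, smooth closed boundary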

  18. Automatic target validation based on neuroscientific literature mining for tractography

    PubMed Central

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L.; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by the text-mining models, in both rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and the literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of the text-mining against the literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large number of publications and abstracts. We believe this tool will be useful in helping the neuroscience community to facilitate connectivity studies of particular brain regions. The text mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/. PMID

  19. Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models

    NASA Astrophysics Data System (ADS)

    Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.

    2014-12-01

    Modelling and understanding large hydrosystems is a complex process depending on regional and local processes. A nested interface concept has been implemented in the hydrosystem modelling platform for a large alluvial plain model (300 km2), part of an 11000 km2 multi-layer aquifer system included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (Mass balance, Unsaturated Zone, River and Groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets from the regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale (a dense past piezometric snapshot), it permits the calibration and homogenization of model parameters across scales. The stepwise procedure starts with the optimisation of the water mass balance parameters at the regional scale using a conceptual 7-parameter bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their decomposition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based on the Chapman filter. The model is then run at the regional scale to provide recharge estimates and regional fluxes to the local groundwater model. Another inversion method is then used to determine the local hydrodynamic parameters. This procedure uses an initial kriged transmissivity field which is successively updated until the simulated hydraulic head distribution equals a reference distribution obtained by kriging. The local parameters are then upscaled to the regional model by a renormalisation procedure. This multi-scale automatic calibration procedure enhances the representation of both local and regional processes. Indeed, it permits a better description of local heterogeneities and of the associated processes, which are transposed into the regional model, improving the overall performances

  20. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types. PMID:21775265
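
    The first stage of such a scheme, modelling the gray-level histogram with a Gaussian mixture and locating crossing points of adjacent weighted components, can be sketched as below; the numerical search over a gray-level grid is a simplification of the analytical intersection computation, and the component count is an assumption.

        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def graylevel_partition(image, n_components=3):
            """Return gray-level interval edges at intersections of adjacent GMM components."""
            g = image.reshape(-1, 1).astype(float)
            gmm = GaussianMixture(n_components=n_components, random_state=0).fit(g)
            order = np.argsort(gmm.means_.ravel())
            means = gmm.means_.ravel()[order]
            stds = np.sqrt(gmm.covariances_.ravel()[order])
            weights = gmm.weights_[order]

            grid = np.linspace(g.min(), g.max(), 2048)
            edges = []
            for a in range(n_components - 1):
                b = a + 1
                pa = weights[a] * norm.pdf(grid, means[a], stds[a])   # weighted component pdfs
                pb = weights[b] * norm.pdf(grid, means[b], stds[b])
                diff = np.abs(pa - pb)
                diff[(grid < means[a]) | (grid > means[b])] = np.inf  # search between the two means
                edges.append(grid[np.argmin(diff)])
            return [g.min()] + edges + [g.max()]                      # input gray-level intervals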

  1. Approach for the Semi-Automatic Verification of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Belton, D.; Moncrieff, S.

    2013-04-01

    In the field of spatial sciences, there are a large number of disciplines and techniques for capturing data to solve a variety of different tasks and problems for different applications. Examples include traditional surveys for boundary definition, aerial imagery for building models, and laser scanning for heritage facades. These techniques have different attributes such as the number of dimensions, accuracy and precision, and the format of the data. However, because of the number of applications and jobs, over time these data sets, captured from different sensor platforms and for different purposes, will often overlap in some way. In most cases, while these data are archived, they are not used in future applications to add value to the data capture campaigns of current projects. It is also the case that newly acquired data are often not used to combine with and improve existing models and data integrity. The purpose of this paper is to discuss a methodology and infrastructure to automatically support this concept: that is, based on a job specification, to automatically query existing and newly acquired data based on temporal and spatial relations, and to automatically combine them and generate the best solution. To this end, there are three main challenges to examine: change detection, thematic accuracy and data matching.

  2. Automatic Creation of Structural Models from Point Cloud Data: the Case of Masonry Structures

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; Conde-Carnero, B.; González-Jorge, H.; Arias, P.; Caamaño, J. C.

    2015-08-01

    One of the fields where 3D modelling plays an important role is the application of such 3D models for structural engineering purposes. The literature shows intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance for masonry structures, where geometry plays a key role. In the work presented in this paper, color data (from the intensity attribute) are used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed and successful results were found. The geometric accuracy and resolution of the point cloud are constrained by the scanning instruments, giving accuracy levels of a few millimetres in the case of static instruments and a few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low-quality images, because acceptable segmentation results were found in cases where blocks could not be visually segmented.
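
    A generic marker-controlled watershed on an intensity image, in the spirit of the segmentation described above, is sketched below; the marker choice (one marker per local intensity maximum) and the distance parameter are simple assumptions rather than the paper's improved marker strategy on rasterized point-cloud intensity.

        import numpy as np
        from skimage.filters import sobel
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def segment_blocks(intensity):
            """Marker-controlled watershed: one marker per local intensity maximum."""
            gradient = sobel(intensity)                           # high along mortar joints / interfaces
            peaks = peak_local_max(intensity, min_distance=15)    # candidate block interiors
            markers = np.zeros(intensity.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(gradient, markers)                   # one integer label per block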

  3. The MSP430-based control system for automatic ELISA tester

    NASA Astrophysics Data System (ADS)

    Zhao, Xinghua; Zhu, Lianqing; Dong, Mingli; Lin, Ting; Niu, Shouwei

    2006-11-01

    This paper introduces the scheme of a control system for a fully automatic ELISA (Enzyme-Linked Immunosorbent Assay) tester. This tester is designed to realize the movement and positioning of the robotic arms and the pipettors and to complete the functions of pumping, reading, washing, incubating and so on. It is based on an MSP430 flash chip, a 16-bit MCU manufactured by TI, with very low power consumption and powerful functionality. This chip is adopted in all devices of the workstation to run the control program, to store the relevant parameters and data, and to drive stepper motors. Motors, sensors, valves and fans are connected to the MCUs. A personal computer (PC) is employed to communicate with the instrument through an interface board. Relevant hardware circuits are provided. Two programs are developed: one, running on the PC, handles user operations concerning assay options and results; the other, running on the MCU, initializes the system and waits for commands to drive the mechanisms. Through various examinations, this control system has been shown to be reliable, efficient and flexible.

  4. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
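
    The sketch below illustrates the general idea of panel (storyboard) detection from edges, using OpenCV's Canny detector and contour bounding boxes as a simplified stand-in for the paper's edge-segment and border-line analysis; the reading-order rule is an assumption, not the authors' method.

```python
# Hedged sketch: detect rectangular storyboard panels via Canny edges and
# external contours, then order them heuristically.
import cv2
import numpy as np

def detect_panels(page_bgr, min_area_ratio=0.01):
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    page_area = gray.shape[0] * gray.shape[1]
    panels = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area_ratio * page_area:   # discard small non-panel contours
            panels.append((x, y, w, h))
    # Rough reading order: group into 50-px row bands, then right-to-left within
    # a band (an assumption; Western comics would read left-to-right instead).
    panels.sort(key=lambda r: (r[1] // 50, -r[0]))
    return panels
```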

  5. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvements in accuracy and efficiency for groundwater flow models.
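
    A simplified sketch of the adaptive idea, assuming a linear semi-discrete diffusion system u' = Au + f: backward-Euler steps with a step-doubling local error estimate drive the step-size control. This replaces the paper's dG-based a posteriori estimator and stability factor with a generic heuristic.

```python
# Hedged sketch: adaptive backward-Euler time stepping with step-doubling error
# control (not the dG-based estimator of the paper).
import numpy as np

def backward_euler_step(u, dt, A, f):
    """Solve (I - dt*A) u_new = u + dt*f for a linear diffusion operator A."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * A, u + dt * f)

def adaptive_integrate(u0, A, f, t_end, dt0=1e-3, tol=1e-4):
    t, dt, u = 0.0, dt0, u0.copy()
    while t < t_end:
        dt = min(dt, t_end - t)
        u_full = backward_euler_step(u, dt, A, f)
        u_half = backward_euler_step(backward_euler_step(u, dt / 2, A, f), dt / 2, A, f)
        err = np.linalg.norm(u_full - u_half)          # local error estimate
        if err <= tol:
            t, u = t + dt, u_half                      # accept the more accurate solution
            dt *= (1.5 if err < 0.1 * tol else 1.0)    # grow step when comfortably accurate
        else:
            dt *= 0.5                                  # reject step and retry
    return u
```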

  6. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous and, furthermore, are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  7. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate it, limiting the accessibility of such approaches. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA. PMID:21181572

  8. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    NASA Astrophysics Data System (ADS)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to obtain a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and the Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  9. Automatic building of a web-like structure based on thermoplastic adhesive.

    PubMed

    Leach, Derek; Wang, Liyu; Reusser, Dorothea; Iida, Fumiya

    2014-09-01

    Animals build structures to extend their control over certain aspects of the environment; for example, orb-weaver spiders build webs to capture prey. Inspired by this behaviour of animals, we attempt to develop robotics technology that allows a robot to automatically build structures to help it accomplish certain tasks. In this paper we show automatic building of a web-like structure with a robot arm based on thermoplastic adhesive (TPA) material. The material properties of TPA, such as elasticity, adhesiveness, and low melting temperature, make it possible for a robot to form threads across an open space by an extrusion-drawing process and then combine several of these threads into a web-like structure. The problems addressed here are discovering which parameters determine the thickness of a thread and determining how web-like structures may be used for certain tasks. We first present a model for the extrusion and the drawing of TPA threads which also includes the temperature-dependent material properties. The model verification result shows that the increasing relative surface area of the TPA thread as it is drawn thinner increases the heat loss of the thread, and that by controlling how quickly the thread is drawn, a range of diameters from 0.2 to 0.75 mm can be achieved. We then present a method based on a generalized nonlinear finite element truss model. The model was validated and could predict the deformation of various web-like structures when payloads are added. Finally, we demonstrate automatic building of a web-like structure for payload bearing. PMID:24960453

  10. Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models

    PubMed Central

    Rojas Q., Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions. PMID:21858069

  11. Fully automatic vertebra detection in x-ray images based on multi-class SVM

    NASA Astrophysics Data System (ADS)

    Lecron, Fabian; Benjelloun, Mohammed; Mahmoudi, Saïd

    2012-02-01

    Automatically detecting vertebral bodies in X-ray images is a very complex task, especially because of the noise and low contrast inherent in this kind of medical imaging modality. Therefore, contributions in the literature have mainly addressed only two imaging modalities: Computed Tomography (CT) and Magnetic Resonance (MR). Few works are dedicated to conventional X-ray radiography, and they mostly propose semi-automatic methods. However, vertebra detection is a key step in many medical applications such as vertebra segmentation, vertebral morphometry, etc. In this work, we develop a fully automatic approach for vertebra detection, based on a learning method. The idea is to detect a vertebra by its anterior corners without human intervention. To this end, the points of interest in the radiograph are firstly detected by an edge polygonal approximation. Then, SIFT descriptors are used to train an SVM model. Therefore, each point of interest can be classified in order to detect whether it belongs to a vertebra or not. Our approach has been assessed by the detection of 250 cervical vertebræ on radiographs. The results show a very high precision with a corner detection rate of 90.4% and a vertebra detection rate from 81.6% to 86.5%.
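
    A hedged sketch of the learning stage, assuming OpenCV's SIFT and scikit-learn are available and that labelled candidate points (vertebra corner vs. background) have already been extracted by polygonal approximation; function names and parameters are illustrative.

```python
# Hedged sketch: SIFT descriptors at candidate points classified by an SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def describe_points(gray, points, size=16.0):
    """Compute SIFT descriptors at (x, y) locations of an 8-bit grayscale image.
    Assumes candidate points lie away from the image border so none are dropped."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), size) for x, y in points]
    _, desc = sift.compute(gray, kps)
    return desc

def train_detector(images, point_sets, labels):
    """images: grayscale arrays; point_sets: candidate points per image;
    labels: per-point arrays with 1 = vertebra corner, 0 = background."""
    X = np.vstack([describe_points(im, pts) for im, pts in zip(images, point_sets)])
    y = np.concatenate(labels)
    return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, y)

def detect(model, gray, candidate_points):
    desc = describe_points(gray, candidate_points)
    keep = model.predict(desc) == 1
    return [p for p, k in zip(candidate_points, keep) if k]
```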

  12. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  13. Profiling School Shooters: Automatic Text-Based Analysis.

    PubMed

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  14. Mindfulness-Based Parent Training: Strategies to Lessen the Grip of Automaticity in Families with Disruptive Children

    ERIC Educational Resources Information Center

    Dumas, Jean E.

    2005-01-01

    Disagreements and conflicts in families with disruptive children often reflect rigid patterns of behavior that have become overlearned and automatized with repeated practice. These patterns are mindless: They are performed with little or no awareness and are highly resistant to change. This article introduces a new, mindfulness-based model of…

  15. Automatic Model Selection for 3d Reconstruction of Buildings from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Arefi, H.; Krauß, T.; Reinartz, P.

    2013-09-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted much interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method to achieve high-quality models and complete automation from the mentioned DSM, in this paper a new method based on a model-driven strategy is proposed. For improving the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of this method is based on ridge line extraction and analysing height values along and perpendicular to the ridge line direction. After applying pre-processing to the orthorectified data, some feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. Finally, these ridge lines are refined by matching them or closing gaps. In order to select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted using Canny edge detection, and parameters are derived from the roof parts. Then the best model is fitted to the extracted roof faces based on the detected model type. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.

  16. A Model of Automatic Identification of Groundwater Parameters using an Expert System

    NASA Astrophysics Data System (ADS)

    Chang, P.; Chang, L.; Jung, C.; Huang, C.; Chen, J.; Tsai, P. J.; Chen, Y.; Wang, Y.

    2010-12-01

    Conventional methods for the identification of groundwater parameters can be categorized into manual and automatic identification. Manual identification determines parameter values using a manual decision-making process. The manual identification process is flexible and understandable. However, the complete process is time-consuming and requires background knowledge of groundwater simulation. In contrast, automatic identification of parameters, which is traditionally founded on optimization-based approaches, has relatively greater computational efficiency. The automatic method uses optimization formulations to represent the concepts of parameter identification, including objective functions and constraints. However, because the formulations are complicated and abstract, application of this method to complicated field problems may be limited. Higher parameter dimensions also imply an increased computational load when using the optimization method. This study used a rule-based expert system and a groundwater simulation model, MODFLOW 2000, to develop an automatic system for identification of groundwater parameters that retains the interpretability and flexibility of manual identification and the computational efficiency of automatic identification. With the expert system as the center of parameter modification, the proposed system can increase its capacity for identification by adding new rules. After empirical knowledge on the identification of groundwater parameters has been generalized and transformed into rules stored in the knowledge base, the expert system proceeds by rule inference performed by the inference engine. In contrast to traditional procedures, the expert system, due to the inference engine, is not sensitive to the order of execution of the rules. This advantage makes maintaining and expanding the knowledge base easier and more flexible. To demonstrate the accuracy and capacity of

  17. Automatic Recommendations for E-Learning Personalization Based on Web Usage Mining Techniques and Information Retrieval

    ERIC Educational Resources Information Center

    Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa

    2009-01-01

    In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…

  18. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration formulated as a Markov Random Field (MRF) model, while efficient linear programming is employed to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first one converging in a few minutes and the second in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  19. Microcomputer-based automatic regulation of extracorporeal circulation: a trial for the application of fuzzy inference.

    PubMed

    Anbe, J; Tobi, T; Nakajima, H; Akasaka, T; Okinaga, K

    1992-10-01

    Since its establishment, many researchers have been trying to automate the process of extracorporeal circulation (ECC). We developed a preliminary experimental model of an automatic regulatory system for ECC. The purpose of the system was to regulate basic hemodynamic parameters such as pump flow and withdrawn blood volume. It was divided into three main components: a data sampling unit, a central processing unit, and a controlling unit. Based on this model we were able to achieve autoregulation of ECC using a minimal configuration; however, the system lacked smoothness. This was partly because it was based on a "static" regulation system which used conditional statements having multiple parameters. In this study, we applied fuzzy logic to the former model to achieve more accurate and reliable regulation. We report experimental results for the new system and compare the data between clinical circulation in 13 infants (mean body weight, 13.32 +/- 5.99 kg) and experimental regulation in 7 mongrel dogs (mean body weight, 11.9 +/- 2.53 kg). The comparative study revealed no statistical difference between the two groups. This result suggests that the automatic regulation of ECC may be an alternative to manual operation by a professional perfusionist in the near future. PMID:10078307

  20. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion produced by a specialist.

  1. ModelMage: a tool for automatic model generation, selection and management.

    PubMed

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to data and provide a ranking for model selection, if data are available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122
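
    To make the master-model idea concrete, the sketch below enumerates candidate models by removing subsets of user-flagged optional reactions and ranks the fits by AIC; `fit_model` and the plain-list reaction representation are assumptions for illustration, not ModelMage's or COPASI's actual API.

```python
# Hedged sketch: enumerate candidate models from a master reaction set and rank
# fits by AIC. fit_model(reactions, data) -> (rss, n_params) is a placeholder.
import math
from itertools import combinations

def candidate_models(master_reactions, optional):
    """Yield reaction lists obtained by removing every subset of optional reactions."""
    optional = [r for r in optional if r in master_reactions]
    for k in range(len(optional) + 1):
        for removed in combinations(optional, k):
            yield [r for r in master_reactions if r not in removed]

def rank_candidates(master_reactions, optional, fit_model, data):
    """Fit every candidate and return (AIC, reactions) pairs sorted best-first."""
    ranked = []
    n = len(data)
    for reactions in candidate_models(master_reactions, optional):
        rss, n_params = fit_model(reactions, data)
        aic = n * math.log(rss / n) + 2 * n_params     # standard least-squares AIC
        ranked.append((aic, reactions))
    return sorted(ranked, key=lambda t: t[0])
```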

  2. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images, which attempted to synergistically combine ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.

  3. Model development for automatic guidance of a VTOL aircraft to a small aviation ship

    NASA Technical Reports Server (NTRS)

    Goka, T.; Sorensen, J. A.; Schmidt, S. F.; Paulk, C. H., Jr.

    1980-01-01

    This paper describes a detailed mathematical model which has been assembled to study automatic approach and landing guidance concepts to bring a VTOL aircraft onto a small aviation ship. The model is used to formulate system simulations which in turn are used to evaluate different guidance concepts. Ship motion (Sea State 5), wind-over-deck turbulence, MLS-based navigation, implicit model following flight control, lift fan V/STOL aircraft, ship and aircraft instrumentation errors, various steering laws, and appropriate environmental and human factor constraints are included in the model. Results are given to demonstrate use of the model and simulation to evaluate performance of the flight system and to choose appropriate guidance techniques for further cockpit simulator study.

  4. An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background. PMID:20953911

  5. Computer-based automatic finger- and speech-tracking system.

    PubMed

    Breidegard, Björn

    2007-11-01

    This article presents the first technology ever for online registration and interactive and automatic analysis of finger movements during tactile reading (Braille and tactile pictures). Interactive software has been developed for registration (with two cameras and a microphone), MPEG-2 video compression and storage on disk or DVD as well as an interactive analysis program to aid human analysis. An automatic finger-tracking system has been implemented which also semiautomatically tracks the reading aloud speech on the syllable level. This set of tools opens the way for large scale studies of blind people reading Braille or tactile images. It has been tested in a pilot project involving congenitally blind subjects reading texts and pictures. PMID:18183897

  6. Automatic script identification from images using cluster-based templates

    SciTech Connect

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
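
    A minimal sketch of the template idea, assuming symbols have already been segmented and scaled to a fixed size: per-script templates are cluster centroids (k-means here, as one clustering choice), and a new document is assigned to the script whose templates give the smallest mean best-match distance.

```python
# Hedged sketch: cluster-based script templates and nearest-template identification.
import numpy as np
from sklearn.cluster import KMeans

def build_templates(symbol_images, n_clusters=40):
    """symbol_images: array (n_symbols, h, w) of fixed-size symbols from one script."""
    X = symbol_images.reshape(len(symbol_images), -1).astype(float)
    km = KMeans(n_clusters=min(n_clusters, len(X)), n_init=10, random_state=0).fit(X)
    return km.cluster_centers_                     # one centroid (template) per cluster

def identify_script(doc_symbols, templates_by_script):
    """Pick the script whose templates best match the document's symbols."""
    X = doc_symbols.reshape(len(doc_symbols), -1).astype(float)
    scores = {}
    for script, templates in templates_by_script.items():
        d = np.linalg.norm(X[:, None, :] - templates[None, :, :], axis=2)
        scores[script] = d.min(axis=1).mean()      # mean best-match distance
    return min(scores, key=scores.get)
```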

  7. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  8. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  9. Study of burn scar extraction automatically based on level set method using remote sensing data.

    PubMed

    Liu, Yang; Dai, Qin; Liu, Jianbo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve, which results from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm are made to show that the proposed approach can extract the outline curve of a fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
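
    A hedged sketch of the overall flow, assuming co-registered pre- and post-fire NIR/SWIR bands: a dNBR difference image is clustered with K-means to form the initial curve, and scikit-image's morphological Chan-Vese routine stands in for the modified C-V level-set model of the paper.

```python
# Hedged sketch: burn scar extraction from an NBR difference image with a
# K-means initial mask refined by a morphological Chan-Vese level set.
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import morphological_chan_vese

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-6)

def extract_burn_scar(pre_nir, pre_swir, post_nir, post_swir, iterations=100):
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)    # difference image
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dnbr.reshape(-1, 1))
    burned_cluster = km.cluster_centers_.argmax()                # higher dNBR = burned
    init = (km.labels_ == burned_cluster).reshape(dnbr.shape)    # initial level set
    return morphological_chan_vese(dnbr, iterations, init_level_set=init)
```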

  10. Automatic vertebral identification using surface-based registration

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
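
    The selection step can be sketched as below, assuming each candidate vertebra is available as a surface point cloud: a basic ICP aligns the physical points to each candidate, and the candidate with the lowest standard deviation of residual surface error is chosen. This is a simplification of the registration actually used, for illustration only.

```python
# Hedged sketch: register physical points to candidate vertebral surfaces with a
# basic ICP and pick the surface with the lowest residual standard deviation.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto matches Q."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp_residuals(points, surface, n_iter=30):
    """surface: (M, 3) array of surface points; returns final point distances."""
    tree, P = cKDTree(surface), points.copy()
    for _ in range(n_iter):
        _, idx = tree.query(P)
        R, t = rigid_fit(P, surface[idx])
        P = P @ R.T + t
    return tree.query(P)[0]

def best_vertebra(points, candidate_surfaces):
    stds = [icp_residuals(points, s).std() for s in candidate_surfaces]
    return int(np.argmin(stds))                   # index of the matching vertebra
```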

  11. Automatic Parallelization Using OpenMP Based on STL Semantics

    SciTech Connect

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  12. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  13. Validating Automatically Generated Students' Conceptual Models from Free-text Answers at the Level of Concepts

    NASA Astrophysics Data System (ADS)

    Pérez-Marín, Diana; Pascual-Nieto, Ismael; Rodríguez, Pilar; Anguiano, Eloy; Alfonseca, Enrique

    2008-11-01

    Students' conceptual models can be defined as networks of interconnected concepts, in which a confidence value (CV) is estimated for each concept. This CV indicates how confident the system is that the student knows the concept, according to how the student has used it in the free-text answers provided to an automatic free-text scoring system. In a previous work, a preliminary validation was done based on the global comparison between the score achieved by each student in the final exam and the score associated with his or her model (calculated as the average of the CVs of the concepts), and a statistically significant Pearson correlation of 0.50 (p = 0.01) was reached. In order to complete those results, in this paper the granularity has been lowered to the level of each particular concept. In fact, the correspondence between the human estimate of how well each concept of the conceptual model is known and the computer estimate is calculated. A mean quadratic error of 0.08 between the two values was attained, which validates the automatically generated students' conceptual models at the concept level of granularity.

  14. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  15. An image-based automatic mesh generation and numerical simulation for a population-based analysis of aerosol delivery in the human lungs

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2013-11-01

    The authors propose a method to automatically generate three-dimensional subject-specific airway geometries and meshes for computational fluid dynamics (CFD) studies of aerosol delivery in the human lungs. The proposed method automatically expands computed tomography (CT)-based airway skeleton to generate the centerline (CL)-based model, and then fits it to the CT-segmented geometry to generate the hybrid CL-CT-based model. To produce a turbulent laryngeal jet known to affect aerosol transport, we developed a physiologically-consistent laryngeal model that can be attached to the trachea of the above models. We used Gmsh to automatically generate the mesh for the above models. To assess the quality of the models, we compared the regional aerosol distributions in a human lung predicted by the hybrid model and the manually generated CT-based model. The aerosol distribution predicted by the hybrid model was consistent with the prediction by the CT-based model. We applied the hybrid model to 8 healthy and 16 severe asthmatic subjects, and average geometric error was 3.8% of the branch radius. The proposed method can be potentially applied to the branch-by-branch analyses of a large population of healthy and diseased lungs. NIH Grants R01-HL-094315 and S10-RR-022421, CT data provided by SARP, and computer time provided by XSEDE.

  16. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    PubMed

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was previously not included in the optimization dataset. The presented results demonstrate that patient's medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required. PMID:23936315
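
    Of the three algorithms evaluated, seeded region growing is the simplest to sketch; the version below grows a 6-connected region in a CT volume while voxels stay within a tolerance of the running region mean. Thresholds, connectivity, and the seeding convention are illustrative assumptions, not the authors' tuned implementation.

```python
# Hedged sketch: 3-D seeded region growing for organ segmentation.
import numpy as np
from collections import deque

def region_grow(volume, seed, tol=25.0):
    """volume: 3-D intensity array; seed: (z, y, x); grow while |I - mean| <= tol."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(volume[seed]), 1
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - total / count) <= tol:   # homogeneity test
                    mask[n] = True
                    total += float(volume[n])
                    count += 1
                    queue.append(n)
    return mask
```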

  17. Design of underwater robot lines based on a hybrid automatic optimization strategy

    NASA Astrophysics Data System (ADS)

    Lyu, Wenjing; Luo, Weilin

    2014-09-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimum resistance is taken as the optimization goal, the wet surface area as the constraint, and the length of the fore-body, the maximum body radius and the minimum after-body radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for the numerical simulation. Analyses of the simulation results show that the platform is efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a globally optimal solution and improves the efficiency of the solution search.
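
    A compact sketch of the particle swarm optimizer at the core of such a platform; the toy `resistance` function and wet-surface-area penalty merely stand in for the CFD evaluation and constraint handling, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: particle swarm optimization with a penalized constraint.
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # keep inside bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Illustrative stand-in for the CFD-evaluated resistance with an area penalty.
def resistance(design):             # design = [fore_length, max_radius, aft_radius]
    drag = design[0] * 0.1 + design[1] ** 2 - 0.05 * design[2]
    wet_area = 2 * np.pi * design[1] * design[0]
    return drag + 1e3 * max(0.0, wet_area - 50.0)   # penalize constraint violation

best, value = pso(resistance, bounds=[(1.0, 5.0), (0.2, 1.0), (0.05, 0.3)])
```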

  18. Automatic recognition of piping system from laser scanned point clouds using normal-based region growing

    NASA Astrophysics Data System (ADS)

    Kawashima, K.; Kanai, S.; Date, H.

    2013-10-01

    In recent years, renovations of plant equipment have become more frequent, and constructing 3D as-built models of existing plants from large-scale laser scanned data is expected to make rebuilding processes more efficient. However, laser scanned data consist of an enormous number of points, capture tangled objects, and include a high noise level, so manual reconstruction of a 3D model is very time-consuming. Among plant equipment, piping systems account for the greatest proportion. Therefore, the purpose of this research was to propose an algorithm which can automatically recognize a piping system from large-scale laser scanned data of plants. The straight portions of pipes, the connecting parts, and the connection relationships of the piping system can be automatically recognized. Normal-based region growing enables the extraction of points on the piping system. Eigenanalysis of the normal tensor and cylinder surface fitting allow the algorithm to recognize the straight pipe portions. Tracing and interpolating the axes of the piping system yields the connecting parts and the connection relationships between elements of the piping system. The algorithm was applied to large-scale scanned data of an oil rig and a chemical plant. The recognition rates for straight pipes, elbows and junctions reached 93%, 88% and 87%, respectively.

  19. Template-based automatic extraction of the joint space of foot bones from CT scan

    NASA Astrophysics Data System (ADS)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying a graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to the region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraints were set by initial seeds that were automatically generated by thresholding and morphological operations. The performance and the robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  20. Exploiting vibration-based spectral signatures for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Crider, Lauren; Kangas, Scott

    2014-06-01

    Feature extraction algorithms for vehicle classification techniques represent a large branch of Automatic Target Recognition (ATR) efforts. Traditionally, vehicle ATR techniques have assumed that time-series vibration data collected from multiple accelerometers are a function of direct-path, engine-driven signal energy. If, however, the data are highly dependent on measurement location, these pre-established feature extraction algorithms are ineffective. In this paper, we examine the consequences of analyzing vibration data potentially contingent upon transfer path effects by exploring the sensitivity to sensor location. We summarize our analysis of spectral signatures from each accelerometer and investigate similarities within the data.
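
    A simple way to make "vibration-based spectral signature" concrete is shown below: a Welch power spectral density, normalized to unit sum, serves as the signature of one accelerometer channel, and cosine similarity compares signatures across sensor locations. This is illustrative only, not the authors' feature set.

```python
# Hedged sketch: normalized spectral signature per accelerometer channel and a
# similarity measure for comparing sensor locations.
import numpy as np
from scipy.signal import welch

def spectral_signature(x, fs, nperseg=1024, fmax=500.0):
    """x: 1-D acceleration time series sampled at fs Hz."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    keep = f <= fmax
    sig = pxx[keep] / pxx[keep].sum()              # normalize to a spectral "shape"
    return f[keep], sig

def signature_similarity(sig_a, sig_b):
    """Cosine similarity between two spectral signatures of equal length."""
    return float(np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```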

  1. Automatic event detection based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Doubravová, Jana; Wiszniowski, Jan; Horálek, Josef

    2015-04-01

    The proposed algorithm was developed to be used for Webnet, a local seismic network in West Bohemia. The Webnet network was built to monitor the West Bohemia/Vogtland swarm area. During the earthquake swarms there is a large number of events which must be evaluated automatically to get a quick estimate of the current earthquake activity. Our focus is to get good automatic results prior to precise manual processing. With automatic data processing we may also reach a lower completeness magnitude. The first step of automatic seismic data processing is the detection of events. To get good detection performance, we require a low number of false detections as well as a high number of correctly detected events. We used a single layer recurrent neural network (SLRNN) trained on manual detections from swarms in West Bohemia in past years. As inputs to the SLRNN, we use STA/LTA ratios from a half-octave filter bank fed by the vertical and horizontal components of the seismograms. All stations were trained together to obtain the same network with the same neuron weights. We tried several architectures - different numbers of neurons - and different starting points for training. Networks giving the best results on the training set need not be optimal for unknown waveforms. Therefore, we test each network on a test set from a different swarm (but still with similar characteristics, i.e. location, focal mechanisms, magnitude range). We also apply coincidence verification for each event: we lower the number of false detections by rejecting events detected at only one station, and an event is declared at all stations in the network when it coincides at two or more stations. In further work we would like to retrain the network for each station individually so that each station has its own set of coefficients (neural weights). We would also like to apply this method to data from the Reykjanet network located on the Reykjanes peninsula, Iceland. As soon as we have a reliable detection, we can proceed to
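
    The detector inputs can be sketched as follows, assuming SciPy is available: STA/LTA characteristic functions are computed per band of a half-octave filter bank. Band edges, window lengths, and filter order are illustrative choices, not Webnet's actual configuration, and the sampling rate must exceed twice the top band edge.

```python
# Hedged sketch: STA/LTA characteristic functions over a half-octave filter bank.
import numpy as np
from scipy.signal import butter, sosfilt

def sta_lta(x, fs, sta_win=0.5, lta_win=10.0):
    """Short-term / long-term average ratio of the rectified trace."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    e = np.abs(x)
    sta = np.convolve(e, np.ones(sta_n) / sta_n, mode='same')
    lta = np.convolve(e, np.ones(lta_n) / lta_n, mode='same')
    return sta / np.maximum(lta, 1e-12)

def filterbank_sta_lta(x, fs, f_lo=1.0, n_bands=6):
    """Stack of per-band STA/LTA traces; bands are half-octave wide from f_lo up."""
    feats, f1 = [], f_lo
    for _ in range(n_bands):
        f2 = f1 * np.sqrt(2.0)                     # half-octave band edge
        sos = butter(4, [f1, f2], btype='bandpass', fs=fs, output='sos')
        feats.append(sta_lta(sosfilt(sos, x), fs))
        f1 = f2
    return np.vstack(feats)                        # shape (n_bands, n_samples)
```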

  2. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image becomes a significant task before performing lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification. A disk expansion algorithm is then applied for automatically detecting lesion boundaries. A set of sub-images, including smooth and irregular boundaries in tumor objects and those in the speckle-noised background, is trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object-oriented or a background-oriented pixel. This enhanced binary image can highlight the object and suppress the speckle noise; it can be regarded as a degraded paint character (DPC) image containing closure noise, which is well known in the perceptual organization of psychology. An effective scheme for removing closure noise using an iterative disk expansion method has been successfully demonstrated in our previous works. The boundary detection of ultrasonic breast lesions can therefore be treated as a removal of speckle noise. By applying the disk expansion method to the binary image, we can obtain a radius-based image in which the radius for each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used for searching the complete breast lesion region and thus the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to
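    To illustrate the pixel-wise object/background classification step described above, the following Python/scikit-learn sketch trains a linear SVM on flattened sub-image patches and then labels every pixel from its surrounding patch. Raw pixel features, the patch size and the helper names are assumptions for illustration; the original work trains on sub-images with particular boundary characteristics.

        import numpy as np
        from sklearn.svm import LinearSVC

        def train_pixel_classifier(object_patches, background_patches):
            """Train a linear SVM separating tumor-like patches from speckle background."""
            X = np.array([p.ravel() for p in object_patches + background_patches], dtype=float)
            y = np.array([1] * len(object_patches) + [0] * len(background_patches))
            clf = LinearSVC(C=1.0)
            clf.fit(X, y)
            return clf

        def classify_image(clf, image, patch=9):
            """Label each pixel as object (1) or background (0) from its surrounding patch."""
            half = patch // 2
            padded = np.pad(image, half, mode="reflect")
            out = np.zeros(image.shape, dtype=np.uint8)
            for i in range(image.shape[0]):
                rows = [padded[i:i + patch, j:j + patch].ravel() for j in range(image.shape[1])]
                out[i, :] = clf.predict(np.array(rows, dtype=float))
            return out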

  3. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.

  4. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  5. Galaxies and Genes: Towards an Automatic Modeling of Interacting Galaxies (Oral Contribution)

    NASA Astrophysics Data System (ADS)

    Theis, Christian; Gerds, Christoph; Spinneker, Christian

    The main problems in modeling interacting galaxies are the extended parameter space and the fairly high CPU costs of self-consistent N-body simulations. Therefore, traditional modeling techniques suffer from either extreme CPU demands or trapping in local optima (or both). A very promising alternative approach is offered by evolutionary algorithms, which mimic natural adaptation in order to optimize the numerical models. One main advantage is their very weak dependence on starting points, which makes them much less prone to trapping in local optima. We present a Genetic Algorithm (GA) coupled with a fast (but not self-consistent) restricted N-body solver. This combination allows us to identify interesting regions of parameter space within only a few CPU hours on a standard PC or a few CPU minutes on a parallel computer. In particular, we demonstrate here the ability of GA-based fitting procedures to analyse observational data automatically, provided the data are sufficiently accurate.

  6. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs registration over scale and in-plane rotation fully automatically.

  7. Renal Transplantation by Automatic Anastomotic Device in a Porcine Model.

    PubMed

    Lo Monte, Attilio Ignazio; Damiano, Giuseppe; Palumbo, Vincenzo Davide; Spinelli, Gabriele; Buscemi, Giuseppe

    2015-10-01

    Automatic vascular staplers for vascular anastomoses in kidney transplantation may dramatically reduce the operative time and, in particular, the warm ischemia time, thus improving the outcome of transplantation. Ten pigs underwent kidney auto-transplantation with an automatic anastomotic device. Kidneys were collected by laparotomy with selective ligations at the renal hilum and perfused with cold storage solution. To overcome the short length of the renal hilum, a tract of the internal jugular vein was harvested to increase the length of the vessels. The anastomoses were performed entirely with the anastomotic device. Of the 10 kidney transplants, nine were successful and no complications occurred. Renal resistive indexes showed a slight increase in the immediate postoperative period, returning to normal at 10 days of follow-up. We demonstrated the feasibility of performing renal vascular anastomoses by means of an automatic anastomotic device. This instrument, developed for coronary bypass surgery and thus suited to small-caliber vessels, could be adopted on a larger scale for renal transplantation. The reduced warm ischemia time needed for anastomosis may help to achieve a better outcome for the graft and expand the pool of marginal donors in renal transplantation. PMID:25900063

  8. An automatic and overlap based method for LiDAR intensity correction

    NASA Astrophysics Data System (ADS)

    Ding, Qiong

    2016-03-01

    LiDAR provides intensity data that reflect the material characteristics of objects. However, because of errors introduced during data acquisition, intensity values need to be corrected before they can be used reliably in applications. This study proposed an automatic, overlap-based method for intensity correction. Firstly, a radar-equation-based method was employed to remove the main errors. Then, a nearest-neighbor algorithm was used to find homologous points in the overlap regions, under the assumption that homologous points should have the same intensity. Finally, an improved model was utilized to eliminate overlap discrepancies. This method can be considered a potential aid to enhance the accuracy of LiDAR intensity data and improve the automation of data processing.
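    As a rough illustration of the radar-equation step, the following NumPy sketch normalizes raw intensities for range and incidence angle; the quadratic range dependence, the cosine term, the reference range and the function name are standard simplifying assumptions rather than the paper's exact model.

        import numpy as np

        def correct_intensity(intensity, ranges, incidence_angles, ref_range=20.0):
            """Range and incidence-angle normalization of raw LiDAR intensity.

            intensity        : raw intensity values (1D array)
            ranges           : sensor-to-target distances in metres
            incidence_angles : angle between laser beam and surface normal, in radians
            ref_range        : reference range the intensities are normalized to (assumed)
            """
            intensity = np.asarray(intensity, dtype=float)
            # Received power falls off with the square of range and the cosine of incidence.
            scale = (np.asarray(ranges, dtype=float) / ref_range) ** 2
            cos_theta = np.maximum(np.cos(np.asarray(incidence_angles, dtype=float)), 1e-3)
            return intensity * scale / cos_theta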

  9. Wireless Sensor Network-Based Greenhouse Environment Monitoring and Automatic Control System for Dew Condensation Prevention

    PubMed Central

    Park, Dae-Heon; Park, Jang-Woo

    2011-01-01

    Dew condensation on the leaf surface of greenhouse crops can promote diseases caused by fungus and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula to calculate the dew point on the leaves, the system is designed to prevent dew condensation on the crop’s surface, an important element in preventing disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control. PMID:22163813
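    The control decision hinges on comparing leaf temperature with the computed dew point. The paper uses the Barenbrug formula, whose exact form is not reproduced in the abstract; the Python sketch below substitutes the widely used Magnus approximation purely for illustration, with assumed coefficients and a hypothetical risk margin.

        import math

        def dew_point_celsius(temp_c, rel_humidity):
            """Estimate the dew point from air temperature (deg C) and relative humidity (0-100%).

            Magnus approximation used as a stand-in for the Barenbrug formula cited in the paper.
            """
            a, b = 17.27, 237.7  # Magnus coefficients, valid roughly for 0-60 deg C
            gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity / 100.0)
            return (b * gamma) / (a - gamma)

        def dew_risk(leaf_temp_c, air_temp_c, rel_humidity, margin=1.0):
            """Flag a condensation risk when the leaf temperature approaches the dew point."""
            return leaf_temp_c <= dew_point_celsius(air_temp_c, rel_humidity) + margin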

  10. A SIFT feature based registration algorithm in automatic seal verification

    NASA Astrophysics Data System (ADS)

    He, Jin; Ding, Xuewen; Zhang, Hao; Liu, Tiegen

    2012-11-01

    A SIFT (Scale Invariant Feature Transform) feature based registration algorithm is presented to prepare for seal verification, especially for the verification of high-quality counterfeit sample seals. The similarities and the spatial relationships between the matched SIFT features are combined for the registration. SIFT features extracted from the binary model seal and sample seal images are matched according to their similarities. The matching rate is used to identify a sample seal that is similar to its model seal. For such a similar sample seal, the false matches are eliminated according to the positional relationship. Then the homography between the model seal and the sample seal is constructed and named H_S; the theoretical homography is named H. The accuracy of registration is evaluated by the Frobenius norm of H - H_S. In experiments, translation, filling and rotation transformations are applied to seals with different shapes, stroke numbers and structures. After registering the transformed seals and their model seals, the maximum value of the Frobenius norm of H - H_S is not more than 0.03. The results prove that this algorithm can accomplish accurate registration, which is invariant to translation, filling, and rotation transformation, and there is no limit on the seal shapes, stroke numbers and structures.
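    A compact OpenCV/Python sketch of the pipeline described above, SIFT matching followed by spatially consistent homography estimation, is given below. The Lowe ratio test and RANSAC are used here as stand-ins for the paper's own similarity and positional-relationship filtering; the threshold values and function name are illustrative assumptions, and cv2.SIFT_create requires OpenCV 4.4 or later.

        import numpy as np
        import cv2

        def register_seal(model_img, sample_img, ratio=0.75):
            """Estimate the homography H_S mapping sample seal coordinates to model coordinates."""
            sift = cv2.SIFT_create()
            kp_m, des_m = sift.detectAndCompute(model_img, None)
            kp_s, des_s = sift.detectAndCompute(sample_img, None)

            # Match by descriptor similarity and keep matches passing Lowe's ratio test.
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = [m for m, n in matcher.knnMatch(des_s, des_m, k=2)
                       if m.distance < ratio * n.distance]

            src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # RANSAC removes remaining false matches based on spatial consistency.
            H_S, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H_S

    The registration error against a known transformation H can then be computed as numpy.linalg.norm(H - H_S, 'fro'), mirroring the Frobenius-norm evaluation in the abstract.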

  11. Evaluation of Automatic Atlas-Based Lymph Node Segmentation for Head-and-Neck Cancer

    SciTech Connect

    Stapleford, Liza J.; Lawson, Joshua D.; Perkins, Charles; Edelman, Scott; Davis, Lawrence

    2010-07-01

    Purpose: To evaluate if automatic atlas-based lymph node segmentation (LNS) improves efficiency and decreases inter-observer variability while maintaining accuracy. Methods and Materials: Five physicians with head-and-neck IMRT experience used computed tomography (CT) data from 5 patients to create bilateral neck clinical target volumes covering specified nodal levels. A second contour set was automatically generated using a commercially available atlas. Physicians modified the automatic contours to make them acceptable for treatment planning. To assess contour variability, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to take collections of contours and calculate a probabilistic estimate of the 'true' segmentation. Differences between the manual, automatic, and automatic-modified (AM) contours were analyzed using multiple metrics. Results: Compared with the 'true' segmentation created from manual contours, the automatic contours had a high degree of accuracy, with sensitivity, Dice similarity coefficient, and mean/max surface disagreement values comparable to the average manual contour (86%, 76%, 3.3/17.4 mm automatic vs. 73%, 79%, 2.8/17 mm manual). The AM group was more consistent than the manual group for multiple metrics, most notably reducing the range of contour volume (106-430 mL manual vs. 176-347 mL AM) and percent false positivity (1-37% manual vs. 1-7% AM). Average contouring time savings with the automatic segmentation was 11.5 min per patient, a 35% reduction. Conclusions: Using the STAPLE algorithm to generate 'true' contours from multiple physician contours, we demonstrated that, in comparison with manual segmentation, atlas-based automatic LNS for head-and-neck cancer is accurate, efficient, and reduces interobserver variability.

  12. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can serve as an assistive tool enabling further inspection of sleep disorder cases in clinical practice.

  13. Environmental monitoring based on automatic change detection from remotely sensed data: kernel-based approach

    NASA Astrophysics Data System (ADS)

    Shah-Hosseini, Reza; Homayouni, Saeid; Safari, Abdolreza

    2015-01-01

    In the event of a natural disaster, such as a flood or earthquake, using fast and efficient methods for estimating the extent of the damage is critical. Automatic change mapping and estimation are important in order to monitor environmental changes, e.g., deforestation. Traditional change detection (CD) approaches are time consuming, user dependent, and strongly influenced by noise and/or complex spectral classes in a region. Change maps obtained by these methods usually suffer from isolated changed pixels and have low accuracy. To deal with this, an automatic CD framework is proposed, based on the integration of the change vector analysis (CVA) technique, kernel-based C-means clustering (KCMC), and a kernel-based minimum distance (KBMD) classifier. In parallel with the proposed algorithm, a support vector machine (SVM) CD method is presented and analyzed. In the first step, a differential image is generated via two approaches in a high-dimensional Hilbert space. Next, by using CVA and automatically determining a threshold, pseudo-training samples of the change and no-change classes are extracted. These training samples are used for determining the initial values of the KCMC parameters and for training the SVM-based CD method. Then a cost function capturing geometrical and spectral similarity in the kernel space is optimized in order to estimate the KCMC parameters and to select precise training samples. These training samples are used to train the KBMD classifier. Last, the class label of each unknown pixel is determined using the KBMD classifier and the SVM-based CD method. In order to evaluate the efficiency of the proposed algorithm for various remote sensing images and applications, two different datasets acquired by Quickbird and Landsat TM/ETM+ are used. The results show the good flexibility and effectiveness of this automatic CD method for environmental change monitoring. In addition, the comparative analysis of results from the proposed method

  14. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.

  15. Patch-based label fusion for automatic multi-atlas-based prostate segmentation in MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images, which utilizes patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on anatomical signature. This segmentation technique was validated with a clinical study of 13 patients and its accuracy was assessed using the physicians' manual segmentations (gold standard). Dice volumetric overlapping was used to quantify the difference between the automatic and manual segmentation. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy with manual segmentations.
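    The core of the patch-based label fusion step is a weighted vote in which each selected atlas contributes a label weighted by how similar its local patch is to the target patch. The NumPy sketch below illustrates this for a single voxel; the Gaussian weighting, the decay parameter h and the function name are common choices assumed here rather than details taken from the paper.

        import numpy as np

        def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
            """Non-local patch-based label fusion for one voxel.

            target_patch  : intensity patch around the voxel in the target image
            atlas_patches : corresponding patches from the selected atlases
            atlas_labels  : labels (0/1 prostate) the atlases propose for this voxel
            h             : decay parameter controlling how fast weights fall off (assumed)
            """
            t = np.asarray(target_patch, dtype=float).ravel()
            # Each atlas patch votes with a weight decreasing with its difference from the target.
            weights = np.array([np.exp(-np.sum((t - np.asarray(p, float).ravel()) ** 2) / (h ** 2))
                                for p in atlas_patches])
            weights /= max(weights.sum(), 1e-12)
            return int(np.dot(weights, atlas_labels) >= 0.5)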

  16. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm that is inspired by the social behavior of bird flocking or fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
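    To make the PSO step concrete, here is a minimal, self-contained Python sketch of a particle swarm optimizer that could drive such a calibration, e.g. by minimizing one minus the Nash-Sutcliffe efficiency of simulated streamflow. The inertia and acceleration coefficients, swarm size and function name are illustrative defaults, not the settings used with SWAT in the paper.

        import numpy as np

        def pso_calibrate(objective, lower, upper, n_particles=30, n_iter=200,
                          w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer for model parameter calibration.

            objective    : maps a parameter vector to a scalar error to minimize
            lower, upper : arrays of parameter bounds
            Returns the best parameter vector found.
            """
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            dim = lower.size
            x = rng.uniform(lower, upper, size=(n_particles, dim))   # particle positions
            v = np.zeros_like(x)                                     # particle velocities
            pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()

            for _ in range(n_iter):
                r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
                # Move toward personal and global bests with random perturbations.
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lower, upper)
                f = np.array([objective(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest

    In use, objective would wrap a model run, e.g. lambda params: 1.0 - nash_sutcliffe(simulate(params), observed_flow), with the simulator and metric supplied by the modeler.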

  17. Automatic 3-D gravity modeling of sedimentary basins with density contrast varying parabolically with depth

    NASA Astrophysics Data System (ADS)

    Chakravarthi, V.; Sundararajan, N.

    2004-07-01

    A method to model 3-D sedimentary basins with density contrast varying with depth is presented along with a code, GRAV3DMOD. The measured gravity fields, reduced to a horizontal plane, are assumed to be available at the grid nodes of a rectangular/square mesh. Juxtaposed 3-D rectangular/square blocks, whose geometric centers on the top surface coincide with the grid nodes of the mesh, approximate the sedimentary basin. The algorithm, based on Newton's forward difference formula, automatically calculates the initial depth estimates of the sedimentary basin by assuming that 2-D infinite horizontal slabs, in which the density contrast varies with depth, could generate the measured gravity fields. Forward modeling is realized through an available code, GR3DPRM, which computes the theoretical gravity field of a 3-D block. The lower boundary of the sedimentary basin is formulated by estimating the depth values of the 3-D blocks within predetermined limits. The algorithm is efficient in the sense that it automatically generates the grid files of the interpreted results, which can be viewed in the form of contour maps. Measured gravity fields pertaining to the Chintalpudi sub-basin, India and the Los Angeles basin, California, USA, in which the density contrast varies with depth, are interpreted to show the applicability of the method.

  18. One-Day Offset between Simulated and Observed Daily Hydrographs: An Exploration of the Issue in Automatic Model Calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Leon, L.; Yang, W.

    2014-12-01

    The literature of hydrologic modelling shows that in daily simulation of the rainfall-runoff relationship, the simulated hydrograph response to some rainfall events happens one day earlier than the observed one. This one-day offset issue results in significant residuals between the simulated and observed hydrographs and adversely impacts the model performance metrics that are based on the aggregation of daily residuals. Based on the analysis of sub-daily rainfall and runoff data sets in this study, the one-day offset issue appears to be inevitable when the same time interval, e.g. the calendar day, is used to measure daily rainfall and runoff data sets. This is an error introduced through data aggregation and needs to be properly addressed before calculating the model performance metrics. Otherwise, the metrics would not represent the modelling quality and could mislead the automatic calibration of the model. In this study, an algorithm is developed to scan the simulated hydrograph against the observed one, automatically detect all one-day offset incidents and shift the simulated hydrograph of those incidents one day forward before calculating the performance metrics. This algorithm is employed in the automatic calibration of the Soil and Water Assessment Tool that is set up for the Rouge River watershed in Southern Ontario, Canada. Results show that with the proposed algorithm, the automatic calibration to maximize the daily Nash-Sutcliffe (NS) identifies a solution that accurately estimates the magnitude of peak flow rates and the shape of rising and falling limbs of the observed hydrographs. But, without the proposed algorithm, the same automatic calibration finds a solution that systematically underestimates the peak flow rates in order to perfectly match the timing of simulated and observed peak flows.

  19. Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation.

    PubMed

    Yu, Yanyan; Chen, Yimin; Chiu, Bernard

    2016-07-01

    Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by a dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside the contour. The transverse image on which the highest accuracy was attained was chosen to be the initial slice for the propagation process. Evaluation was performed on 336 transverse images from 15 prostates, including images acquired at the mid-gland, base and apex regions of the prostates. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79±0.26mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland, but also at the base and apex regions. PMID:27208705

  20. Automatic calibration system for analog instruments based on DSP and CCD sensor

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong

    2008-12-01

    Currently, the calibration of analog measurement instruments is mainly performed manually, and many problems remain to be solved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge Coupled Device (CCD) sensor is developed and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received from the CCD sensor and the outcome is displayed on a Liquid Crystal Display (LCD) screen. In the algorithm, the pointer region is first extracted to improve calibration speed; a mathematical model of the pointer is then built to thin the pointer and determine the instrument's reading. Across numerous experiments, a single reading took no more than 20 milliseconds, whereas manual reading takes several seconds. At the same time, the reading error satisfies the accuracy requirements of the instruments. The results prove that the automatic calibration system can effectively accomplish the calibration of analog measurement instruments.

  1. 3D Fast Automatic Segmentation of Kidney Based on Modified AAM and Random Forest.

    PubMed

    Jin, Chao; Shi, Fei; Xiang, Dehui; Jiang, Xueqing; Zhang, Bin; Wang, Ximing; Zhu, Weifang; Gao, Enting; Chen, Xinjian

    2016-06-01

    In this paper, a fully automatic method is proposed to segment the kidney into multiple components: renal cortex, renal column, renal medulla and renal pelvis, in clinical 3D CT abdominal images. The proposed fast automatic segmentation method of kidney consists of two main parts: localization of renal cortex and segmentation of kidney components. In the localization of renal cortex phase, a method which fully combines 3D Generalized Hough Transform (GHT) and 3D Active Appearance Models (AAM) is applied to localize the renal cortex. In the segmentation of kidney components phase, a modified Random Forests (RF) method is proposed to segment the kidney into four components based on the result from localization phase. During the implementation, a multithreading technology is applied to speed up the segmentation process. The proposed method was evaluated on a clinical abdomen CT data set, including 37 contrast-enhanced volume data using leave-one-out strategy. The overall true-positive volume fraction and false-positive volume fraction were 93.15%, 0.37% for renal cortex segmentation; 83.09%, 0.97% for renal column segmentation; 81.92%, 0.55% for renal medulla segmentation; and 80.28%, 0.30% for renal pelvis segmentation, respectively. The average computational time of segmenting kidney into four components took 20 seconds. PMID:26742124

  2. Automatic Detection and Boundary Extraction of Lunar Craters Based on LOLA DEM Data

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ling, ZongCheng; Zhang, Jiang; Wu, ZhongChen

    2015-07-01

    Impact-induced circular structures, known as craters, are the most obvious geographic and geomorphic features on the Moon. Studies of lunar craters' patterns and spatial distributions play an important role in understanding the geologic processes of the Moon. In this paper, we propose a method based on digital elevation model (DEM) data from the Lunar Orbiter Laser Altimeter to detect lunar craters automatically. Firstly, the DEM data of the study areas are converted to a series of spatial fields having different scales, in which all overlapping depressions are detected in order (larger depressions first, then the smaller ones). Then, every depression's true boundary is calculated by Fourier expansion and shape parameters are computed. Finally, we recognize the craters in training sets manually and build a binary decision tree to automatically classify the identified depressions into craters and non-craters. In addition, our crater-detection method can provide a fast and reliable evaluation of the ages of lunar geologic units, which is of great significance in lunar stratigraphy studies as well as global geologic mapping.

  3. Automatic segmentation of the fetal cerebellum on ultrasound volumes, using a 3D statistical shape model.

    PubMed

    Gutiérrez-Becker, Benjamín; Arámbula Cosío, Fernando; Guzmán Huerta, Mario E; Benavides-Serralde, Jesús Andrés; Camargo-Marín, Lisbeth; Medina Bañuelos, Verónica

    2013-09-01

    Previous work has shown that the segmentation of anatomical structures on 3D ultrasound data sets provides an important tool for the assessment of the fetal health. In this work, we present an algorithm based on a 3D statistical shape model to segment the fetal cerebellum on 3D ultrasound volumes. This model is adjusted using an ad hoc objective function which is in turn optimized using the Nelder-Mead simplex algorithm. Our algorithm was tested on ultrasound volumes of the fetal brain taken from 20 pregnant women, between 18 and 24 gestational weeks. An intraclass correlation coefficient of 0.8528 and a mean Dice coefficient of 0.8 between cerebellar volumes measured using manual techniques and the volumes calculated using our algorithm were obtained. As far as we know, this is the first effort to automatically segment fetal intracranial structures on 3D ultrasound data. PMID:23686392
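    The shape-model fit is driven by an ad hoc objective optimized with the Nelder-Mead simplex algorithm. A minimal SciPy sketch of that optimization step is shown below; the cost function, the number of shape modes and the tolerance settings are placeholders for whatever objective the model actually uses.

        import numpy as np
        from scipy.optimize import minimize

        def fit_shape_model(shape_cost, n_modes=10, init_params=None):
            """Fit statistical-shape-model parameters with the Nelder-Mead simplex method.

            shape_cost : maps a parameter vector (pose + shape coefficients) to a scalar
                         cost measuring how poorly the instantiated cerebellum surface
                         matches the ultrasound volume (hypothetical objective)
            n_modes    : number of shape coefficients in the model (assumed)
            """
            x0 = np.zeros(n_modes) if init_params is None else np.asarray(init_params, float)
            result = minimize(shape_cost, x0, method="Nelder-Mead",
                              options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 5000})
            return result.x, result.fun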

  4. Semi-Automatic Road/Pavement Modeling using Mobile Laser Scanning

    NASA Astrophysics Data System (ADS)

    Hervieu, A.; Soheilian, B.

    2013-10-01

    Scene analysis in urban environments deals with street modeling and understanding. A street mainly consists of roadways, pavements (i.e., walking areas), facades, and still and moving obstacles. In this paper, we investigate the surface modeling of roadways and pavements using LIDAR data acquired by a mobile laser scanning (MLS) system. First, road border detection is considered. A system recognizing curbs and curb ramps while reconstructing the missing information in case of occlusion is presented. A user interface scheme is also described, providing an effective tool for semi-automatic processing of large amounts of data. Then, based upon the road edge information, a process that reconstructs the surfaces of roads and pavements has been developed, providing centimetric precision while reconstructing missing information. This system hence provides important knowledge of the street, which may open perspectives in various domains such as path planning or road maintenance.

  5. Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Shen, Yuzhong

    2011-03-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially for those existing in the real world. Real road network contains various elements such as road segments, road intersections and traffic interchanges. Among them, traffic interchanges present the most challenges to model due to their complexity and the lack of height information (vertical position) of traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting, and they have the advantages of arbitrary levels of detail with reduced memory usage. Finally a set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.

  6. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by AN Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher

  7. An automatic registration algorithm for the scattered point clouds based on the curvature feature

    NASA Astrophysics Data System (ADS)

    He, Bingwei; Lin, Zeming; Li, Y. F.

    2013-03-01

    Object modeling by the registration of multiple range images has important applications in reverse engineering and computer vision. In order to register multi-view scattered point clouds, a novel curvature-based automatic registration algorithm is proposed in this paper, which can solve the registration problem for partially overlapping point clouds. For two sets of scattered point clouds, the curvature of each point is estimated by using the quadratic surface fitting method. The feature points that have the maximum local curvature variations are then extracted. The initial matching points are acquired by computing the Hausdorff distance of the curvatures, and then the circumferential shape feature of the local surface is used to obtain accurate matching points from the initial matching points. Finally, the rotation and translation matrices are estimated using quaternions, and an iterative algorithm is used to improve the registration accuracy. Experimental results show that the algorithm is effective.
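    Once accurate point correspondences are available, the rigid transform can be recovered in closed form. The paper uses a quaternion formulation; the NumPy sketch below shows the equivalent SVD-based (Kabsch) solution purely as an illustration of this estimation step.

        import numpy as np

        def rigid_transform(src, dst):
            """Closed-form rigid alignment of matched 3D point sets.

            src, dst : (N, 3) arrays of corresponding feature points from the two clouds.
            Returns (R, t) such that dst_i is approximately R @ src_i + t.
            """
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of centred points
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t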

  8. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  9. Automatic meshing of curved three-dimensional domains: Curving finite elements and curvature-based mesh control

    SciTech Connect

    Shephard, M.S.; Dey, S.; Georges, M.K.

    1995-12-31

    Specific issues associated with the automatic generation of finite element meshes for curved geometric domains are considered. A review of the definition of when a triangulation is a valid mesh, a geometric triangulation, for curved geometric domains is given. Consideration is then given to the additional operations necessary to maintain the validity of a mesh when curved finite elements are employed. A procedure to control the mesh gradations based on the curvature of the geometric model faces is also given.

  10. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with prior knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity between the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  11. APPLICATION OF AUTOMATIC DIFFERENTIATION FOR STUDYING THE SENSITIVITY OF NUMERICAL ADVECTION SCHEMES IN AIR QUALITY MODELS

    EPA Science Inventory

    In any simulation model, knowing the sensitivity of the system to the model parameters is of utmost importance. As part of an effort to build a multiscale air quality modeling system for a high performance computing and communication (HPCC) environment, we are exploring an automat...

  12. Development of a software based automatic exposure control system for use in image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Morton, Daniel R.

    Modern image guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patients setup, but the radiation dose received by patients from imaging must be managed to ensure no additional risks. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be manually selected. Without patient specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software based automatic exposure control system has been developed to predict optimal, patient specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images that were acquired on the OBI. Overall, regions of detector saturation were accurately predicted and the detector response for non-saturated regions in images of an anthropomorphic phantom were calculated to generally be within 5 to 10 % of the measured values. Calculations were performed on patient data and found similar results as the phantom images, with the calculated images being able to determine detector saturation with close agreement to images that were acquired during treatment. Overall, it was shown that the system model and calculation

  13. Sensitivity based segmentation and identification in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Absher, R.

    1984-03-01

    This research program continued an investigation of sensitivity analysis, and its use in the segmentation and identification of the phonetic units of speech, that was initiated during the 1982 Summer Faculty Research Program. The elements of the sensitivity matrix, which express the relative change in each pole of the speech model to a relative change in each coefficient of the characteristic equation, were evaluated for an expanded set of data which consisted of six vowels contained in single words spoken in a simple carrier phrase by five males with differing dialects. The objectives were to evaluate the sensitivity matrix, interpret its changes during the production of the vowels, and to evaluate inter-speaker variations. It was determined that the sensitivity analysis (1) serves to segment the vowel interval, (2) provides a measure of when a vowel is on target, and (3) should provide sufficient information to identify each particular vowel. Based on the results presented, sensitivity analysis should result in more accurate segmentation and identification of phonemes and should provide a practicable framework for incorporation of acoustic-phonetic variance as well as time and talker normalization.

  14. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial compressed file size reductions by a factor 0.5 on average are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.

  15. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. Like in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data but still common standards defining correct geometric modeling are not precise enough to define a sound base for data validation of 3D city models. Although the workflow for 3D city models is well-established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  16. A Telesurveillance System With Automatic Electrocardiogram Interpretation Based on Support Vector Machine and Rule-Based Processing

    PubMed Central

    Lin, Ching-Miao; Lai, Feipei; Ho, Yi-Lwun; Hung, Chi-Sheng

    2015-01-01

    Background Telehealth care is a global trend affecting clinical practice around the world. To mitigate the workload of health professionals and provide ubiquitous health care, a comprehensive surveillance system with value-added services based on information technologies must be established. Objective We conducted this study to describe our proposed telesurveillance system designed for monitoring and classifying electrocardiogram (ECG) signals and to evaluate the performance of ECG classification. Methods We established a telesurveillance system with an automatic ECG interpretation mechanism. The system included: (1) automatic ECG signal transmission via telecommunication, (2) ECG signal processing, including noise elimination, peak estimation, and feature extraction, (3) automatic ECG interpretation based on the support vector machine (SVM) classifier and rule-based processing, and (4) display of ECG signals and their analyzed results. We analyzed 213,420 ECG signals that were diagnosed by cardiologists as the gold standard to verify the classification performance. Results In the clinical ECG database from the Telehealth Center of the National Taiwan University Hospital (NTUH), the experimental results showed that the ECG classifier yielded a specificity value of 96.66% for normal rhythm detection, a sensitivity value of 98.50% for disease recognition, and an accuracy value of 81.17% for noise detection. For the detection performance of specific diseases, the recognition model mainly generated sensitivity values of 92.70% for atrial fibrillation, 89.10% for pacemaker rhythm, 88.60% for atrial premature contraction, 72.98% for T-wave inversion, 62.21% for atrial flutter, and 62.57% for first-degree atrioventricular block. Conclusions Through connected telehealth care devices, the telesurveillance system, and the automatic ECG interpretation system, this mechanism was intentionally designed for continuous decision-making support and is reliable enough to reduce the

  17. Automatic 3d Building Model Generation from LIDAR and Image Data Using Sequential Minimum Bounding Rectangle

    NASA Astrophysics Data System (ADS)

    Kwak, E.; Al-Durgham, M.; Habib, A.

    2012-07-01

    Digital building models are an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of using single-source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources used to obtain 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most widely used data processing strategies are data-driven and model-driven ones. Theoretically one can model any building shape using data-driven approaches, but practically this leaves the question of how to impose constraints and set the rules during the generation process. Due to the complexity of implementing data-driven approaches, model-based approaches have drawn the attention of researchers. However, the major drawback of model-based approaches is that the establishment of representative models involves a manual process that requires human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and to sequentially adjust them to combine the advantages of image and LiDAR datasets.

  18. One-day offset in daily hydrologic modeling: An exploration of the issue in automatic model calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Leon, Luis; Yang, Wanhong; Bosch, David

    2016-03-01

    The hydrologic modeling literature illustrates that daily simulation models are incapable of accurately representing hydrograph timing due to relationships between precipitation and watershed hydrologic response that occur at a sub-daily time step in the real world. For watersheds with a time of concentration of less than 24 h and a late-day precipitation event, the observed hydrographic response frequently occurs one day after the precipitation peak while the model simulates a same-day event. The analysis of sub-daily precipitation and runoff in this study suggests that this one-day offset is inevitable in daily analysis of the precipitation-runoff relationship when the same 24-h time interval, e.g. the calendar day, is used to prepare daily precipitation and runoff datasets. Under these conditions, daily simulation models will fail to emulate this one-day offset issue (1dOI) and will result in significant daily residuals between simulated and measured hydrographs. Results of this study show that the automatic calibration of such daily models will be misled by model performance metrics based on the aggregation of daily residuals toward a solution that systematically underestimates the peak flow rates while trying to emulate the one-day lags. In this study, a novel algorithm called Shifting Hydrograph In order to Fix Timing (SHIFT) is developed to reduce the impact of this one-day offset issue (1dOI) on the parameter estimation of daily simulation models. Results show that with SHIFT the aforementioned automatic calibration finds a solution that accurately estimates the magnitude of daily peak flow rates and the shape of the rising and falling limbs of the daily hydrograph. Moreover, it is shown that this daily calibrated model performs quite well with an alternative daily precipitation dataset that has a minimal number of 1dOIs, leading to the conclusion that SHIFT can minimize the impact of 1dOI on parameter estimation of daily simulation models.
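    A toy illustration of the idea in Python is given below: before computing a daily performance metric such as the Nash-Sutcliffe efficiency, simulated peaks that lead the observed peaks by exactly one day are pushed forward. The simple local-maximum matching rule used here is an assumption for illustration only, not the published SHIFT algorithm.

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency of a simulated daily hydrograph."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def shift_one_day_offsets(sim, obs):
            """Push simulated peaks that lead the observed peaks by one day forward."""
            sim = np.asarray(sim, float).copy()
            obs = np.asarray(obs, float)
            for i in range(1, len(sim) - 2):
                sim_peak_today = sim[i] > sim[i - 1] and sim[i] >= sim[i + 1]
                obs_peak_tomorrow = obs[i + 1] > obs[i] and obs[i + 1] >= obs[i + 2]
                if sim_peak_today and obs_peak_tomorrow:
                    # Swap the two days so the simulated peak aligns with the observed one.
                    sim[i], sim[i + 1] = sim[i + 1], sim[i]
            return sim

    The daily metric would then be evaluated as nse(shift_one_day_offsets(sim, obs), obs) so that one-day offsets no longer dominate the residuals.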

  19. Controlling Retrieval during Practice: Implications for Memory-Based Theories of Automaticity

    ERIC Educational Resources Information Center

    Wilkins, Nicolas J.; Rawson, Katherine A.

    2011-01-01

    Memory-based processing theories of automaticity assume that shifts from algorithmic to retrieval-based processing underlie practice effects on response times. The current work examined the extent to which individuals can exert control over the involvement of retrieval during skill acquisition and the factors that may influence control. In two…

  20. Speech Recognition-based and Automaticity Programs to Help Students with Severe Reading and Spelling Problems

    ERIC Educational Resources Information Center

    Higgins, Eleanor L.; Raskind, Marshall H.

    2004-01-01

    This study was conducted to assess the effectiveness of two programs developed by the Frostig Center Research Department to improve the reading and spelling of students with learning disabilities (LD): a computer Speech Recognition-based Program (SRBP) and a computer and text-based Automaticity Program (AP). Twenty-eight LD students with reading…

  1. Automatic selection of ROIs in functional imaging using Gaussian mixture models.

    PubMed

    Górriz, J M; Lassl, A; Ramírez, J; Salas-Gonzalez, D; Puntonet, C G; Lang, E W

    2009-08-28

    We present an automatic method for selecting regions of interest (ROIs) of the information contained in three-dimensional functional brain images using Gaussian mixture models (GMMs), where each Gaussian incorporates a contiguous brain region with similar activation. The novelty of the approach is based on approximating the grey-level distribution of a brain image by a sum of Gaussian functions, whose parameters are determined by a maximum likelihood criterion via the expectation maximization (EM) algorithm. Each Gaussian or cluster is represented by a multivariate Gaussian function with a center coordinate and a certain shape. This approach leads to a drastic compression of the information contained in the brain image and serves as a starting point for a variety of possible feature extraction methods for the diagnosis of brain diseases. PMID:19454303
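
    A minimal sketch of the clustering step, assuming scikit-learn's GaussianMixture as the EM implementation and fitting the mixture to the coordinates of supra-threshold voxels. The paper approximates the grey-level distribution itself with a sum of Gaussians, so this is a simplification, and the example volume is synthetic.

        # Sketch: Gaussian mixture over voxel coordinates as candidate ROIs.
        # Parameter choices (threshold, number of components) are illustrative assumptions.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def select_rois(volume, n_rois=8, threshold=0.4):
            """volume: 3D array of normalised activation values in [0, 1]."""
            coords = np.argwhere(volume > threshold)          # voxels with relevant activation
            gmm = GaussianMixture(n_components=n_rois, covariance_type='full',
                                  random_state=0).fit(coords)
            labels = gmm.predict(coords)                      # ROI membership per voxel
            return gmm.means_, gmm.covariances_, coords, labels

        # Hypothetical example: a random "brain image" with two bright blobs
        vol = np.random.rand(32, 32, 32) * 0.3
        vol[8:12, 8:12, 8:12] = 0.9
        vol[20:26, 20:26, 20:26] = 0.8
        means, covs, coords, labels = select_rois(vol, n_rois=2)
        print(means)   # approximate ROI centres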

  2. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T (image-to-text) methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it could assign multiple classes to an image. A framework for the I2T methodology is presented.
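
    The per-material binary-SVM setup described above might look roughly like the following sketch. The feature extraction is deliberately simplified (summary statistics of a small Gabor filter bank plus RGB histograms), images are assumed to be float RGB arrays in [0, 1], and the training images with their material label sets are assumed to be supplied by the caller.

        # Sketch of one binary SVM per material; not the study's exact features or data.
        import numpy as np
        from skimage.color import rgb2gray
        from skimage.filters import gabor
        from sklearn.svm import SVC

        MATERIALS = ["brick", "cloth", "grass", "sand", "stone", "wood"]

        def features(rgb_image):
            """rgb_image: float RGB array with values in [0, 1]."""
            gray = rgb2gray(rgb_image)
            feats = []
            for frequency in (0.1, 0.2, 0.4):               # small Gabor filter bank for texture
                real, imag = gabor(gray, frequency=frequency)
                mag = np.hypot(real, imag)
                feats += [mag.mean(), mag.std()]
            for c in range(3):                              # RGB colour histograms
                hist, _ = np.histogram(rgb_image[..., c], bins=8, range=(0.0, 1.0))
                feats += list(hist / max(hist.sum(), 1))
            return np.array(feats)

        def train_material_classifiers(images, label_sets):
            """One binary SVM per material, so an image can receive several labels."""
            X = np.array([features(img) for img in images])
            classifiers = {}
            for material in MATERIALS:
                y = np.array([material in labels for labels in label_sets], dtype=int)
                if len(np.unique(y)) < 2:
                    continue                                # material absent from the training set
                classifiers[material] = SVC(kernel="rbf").fit(X, y)
            return classifiers

        def detect_materials(classifiers, image):
            x = features(image).reshape(1, -1)
            return [m for m, clf in classifiers.items() if clf.predict(x)[0] == 1]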

  3. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 ± 0.05 mm (mean absolute distance error) in the cervical region and 0.27 ± 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  4. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, the ratio of edge pixels to nonedge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is conducted to merge the points that satisfy a predefined threshold and are supposed to denote the same vehicles. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method achieved an overall correctness of 86% and a completeness of 83%.

  5. Automatic Segmentation of Wrist Bones in CT Using a Statistical Wrist Shape + Pose Model.

    PubMed

    Anas, Emran Mohammad Abu; Rasoulian, Abtin; Seitel, Alexander; Darras, Kathryn; Wilson, David; John, Paul St; Pichora, David; Mousavi, Parvin; Rohling, Robert; Abolmaesumi, Purang

    2016-08-01

    Segmentation of the wrist bones in CT images has been frequently used in different clinical applications including arthritis evaluation, bone age assessment and image-guided interventions. The major challenges include non-uniformity and spongy textures of the bone tissue as well as narrow inter-bone spaces. In this work, we propose an automatic wrist bone segmentation technique for CT images based on a statistical model that captures the shape and pose variations of the wrist joint across 60 example wrists at nine different wrist positions. To establish the correspondences across the training shapes at neutral positions, the wrist bone surfaces are jointly aligned using a group-wise registration framework based on a Gaussian Mixture Model. Principal component analysis is then used to determine the major modes of shape variations. The variations in poses not only across the population but also across different wrist positions are incorporated in two pose models. An intra-subject pose model is developed by utilizing the similarity transforms at all wrist positions across the population. Further, an inter-subject pose model is used to model the pose variations across different wrist positions. For segmentation of the wrist bones in CT images, the developed model is registered to the edge point cloud extracted from the CT volume through an expectation maximization based probabilistic approach. Residual registration errors are corrected by application of a non-rigid registration technique. We validate the proposed segmentation method by registering the wrist model to a total of 66 unseen CT volumes of average voxel size of 0.38 mm. We report a mean surface distance error of 0.33 mm and a mean Jaccard index of 0.86. PMID:26890640
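
    The statistical shape component of such a model can be sketched as PCA over corresponding surface points, as below. Correspondence establishment (the group-wise registration) and the intra- and inter-subject pose models are omitted; the training data in the example are random stand-ins.

        # Sketch of the shape part of a statistical shape + pose model.
        import numpy as np
        from sklearn.decomposition import PCA

        def build_shape_model(training_shapes, n_modes=5):
            """training_shapes: array (n_subjects, n_points, 3) of corresponding points
            already aligned to a common (neutral) pose."""
            n = training_shapes.shape[0]
            X = training_shapes.reshape(n, -1)              # flatten each shape to one row
            return PCA(n_components=n_modes).fit(X)

        def synthesize_shape(pca, coefficients):
            """Generate a new shape as mean + sum_i b_i * mode_i."""
            flat = pca.mean_ + coefficients @ pca.components_
            return flat.reshape(-1, 3)

        # Hypothetical toy data: 60 wrists, 500 corresponding surface points each
        shapes = np.random.rand(60, 500, 3)
        model = build_shape_model(shapes, n_modes=5)
        new_wrist = synthesize_shape(model, np.array([1.5, -0.5, 0.0, 0.0, 0.0]))
        print(new_wrist.shape)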

  6. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
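
    The selection idea can be sketched as one regression tree per PET-AS method, each predicting the Dice score that method would achieve from the tumour descriptors named above; the method with the highest predicted score is then applied. Class and feature names below are illustrative assumptions, not the ATLAAS implementation.

        # Sketch of decision-tree-based method selection for automatic segmentation.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        class AutoSegmentationSelector:
            def __init__(self, methods):
                self.methods = methods                      # e.g. {"thresh40": fn, "region_grow": fn, ...}
                self.trees = {}

            def fit(self, X_train, dsc_per_method):
                """X_train: (n_scans, 3) features [volume, peak-to-background SUV, texture];
                dsc_per_method: dict mapping method name -> (n_scans,) Dice scores."""
                for name in self.methods:
                    self.trees[name] = DecisionTreeRegressor(max_depth=4).fit(
                        X_train, dsc_per_method[name])

            def segment(self, image, x):
                """Predict each method's Dice for features x and apply the best one."""
                predicted = {n: t.predict(x.reshape(1, -1))[0] for n, t in self.trees.items()}
                best = max(predicted, key=predicted.get)
                return best, self.methods[best](image)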

  7. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  8. A magnetic resonance image based atlas of the rabbit brain for automatic parcellation.

    PubMed

    Muñoz-Moreno, Emma; Arbat-Plana, Ariadna; Batalle, Dafnis; Soria, Guadalupe; Illa, Miriam; Prats-Galino, Alberto; Eixarch, Elisenda; Gratacos, Eduard

    2013-01-01

    Rabbit brain has been used in several works for the analysis of neurodevelopment. However, there are no specific digital rabbit brain atlases that allow automatic identification of brain regions, which is a crucial step for various neuroimage analyses; instead, manual delineation of areas of interest must be performed in order to evaluate a specific structure. For this reason, we propose an atlas of the rabbit brain based on magnetic resonance imaging, including both structural and diffusion-weighted imaging, that can be used for the automatic parcellation of the rabbit brain. Ten individual atlases, as well as an average template and probabilistic maps of the anatomical regions, were built. In addition, an example of automatic segmentation based on this atlas is described. PMID:23844007

  9. An automatic abrupt information extraction method based on singular value decomposition and higher-order statistics

    NASA Astrophysics Data System (ADS)

    He, Tian; Ye, Wu; Pan, Qiang; Liu, Xiandong

    2016-02-01

    One key aspect of local fault diagnosis is how to effectively extract abrupt features from the vibration signals. This paper proposes a method to automatically extract abrupt information based on singular value decomposition and higher-order statistics. To observe the distribution law of singular values, a numerical analysis is conducted simulating noise, a periodic signal, an abrupt signal, and their singular value distributions. Based on higher-order statistics and spectrum analysis, a method is built to automatically choose the upper and lower borders of the singular value interval that reflects the abrupt information. The singular values selected by this method are then used to reconstruct the abrupt signals. The method is shown to obtain accurate results when processing the rub-impact fault signal measured in the experiments. The analytical and experimental results indicate that the proposed method is feasible for automatically extracting abrupt information caused by faults such as rotor-stator rub-impact.
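
    A sketch of the overall signal-processing chain: build a Hankel (trajectory) matrix from the vibration signal, decompose it by SVD, keep the singular components whose reconstructions are most impulsive, and rebuild the signal from them. The kurtosis-based selection below is a simplified stand-in for the paper's border-selection method, and the test signal is synthetic.

        # Sketch: Hankel-matrix SVD with a higher-order-statistics (kurtosis) selection rule.
        import numpy as np
        from scipy.linalg import hankel
        from scipy.stats import kurtosis

        def hankel_svd_components(x, rows=64):
            """Decompose x into per-singular-value signal components."""
            H = hankel(x[:rows], x[rows - 1:])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            comps = []
            for i in range(len(s)):
                Hi = s[i] * np.outer(U[:, i], Vt[i])
                # Average the anti-diagonals to map the rank-1 matrix back to a signal.
                comps.append(np.array([np.mean(np.diag(Hi[:, ::-1], k))
                                       for k in range(Hi.shape[1] - 1, -Hi.shape[0], -1)]))
            return np.array(comps)

        def extract_abrupt(x, rows=64, n_keep=5):
            comps = hankel_svd_components(x, rows)
            impulsiveness = kurtosis(comps, axis=1)
            keep = np.argsort(impulsiveness)[-n_keep:]       # most impulsive components
            return comps[keep].sum(axis=0)

        t = np.linspace(0, 1, 512)
        signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)
        signal[200:204] += 3.0                               # simulated rub-impact burst
        print(extract_abrupt(signal).shape)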

  10. BioASF: a framework for automatically generating executable pathway models specified in BioPAX

    PubMed Central

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K. Anton; Abeln, Sanne; Heringa, Jaap

    2016-01-01

    Motivation: Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. Results: To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. Availability and Implementation: The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF. Contact: j.heringa@vu.nl PMID:27307645

  11. Study on automatic airborne image positioning model and its application in FY-3A airborne experiment

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yang, Zhongdong; Guan, Min; Zhang, Liyang; Wang, Tiantian

    2009-08-01

    This paper addresses the airborne image positioning model and its application in the FY-3A experiment. First, the viewing vector of the FY-3A Medium Resolution Spectral Imager (MERSI) is derived from MERSI's imaging pattern. Then, the image positioning model, which is based on Earth-aircraft geometry, is analyzed mathematically in detail. The model parameters are mainly determined by the sensor-aircraft alignment and by the onboard discrete measurements of position and orientation. Flight trials were flown at an altitude of 8300 m over Qinghai Lake, China. The image positioning accuracy (about 1-4 pixels) is shown to be better than that of previous methods (more than 7 pixels, [G. J. Jedlovec et al., NASA Technical Memorandum TM-100352 (1989) and D. P. Roy et al., Int. J. Rem. Sens. 18(9), 1865-1887 (1997)]). It is also shown that the model has the potential to hold the image positioning errors within one pixel. The model can operate automatically and does not need ground control point data. Because the algorithm obtains image positions geometrically, by computing the point at which the sensor viewing vector intersects the Earth's surface, it assumes the airborne data are acquired over flat terrain.
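
    The geometric core described above, intersecting the sensor viewing vector with the Earth's surface, can be sketched as a ray-ellipsoid intersection. The WGS84 ellipsoid is assumed here, and the sensor position and viewing vector are assumed to be already expressed in an Earth-centred frame from the onboard position and attitude measurements.

        # Sketch: intersect a viewing ray with the WGS84 ellipsoid to get the ground point.
        import numpy as np

        A = 6378137.0             # WGS84 semi-major axis [m]
        B = 6356752.314245        # WGS84 semi-minor axis [m]

        def intersect_ellipsoid(p, v):
            """p: sensor position (ECEF, m); v: viewing direction (ECEF).
            Returns the first intersection with the ellipsoid, or None."""
            # Scale z so the ellipsoid becomes a unit sphere, then solve the quadratic.
            scale = np.array([1.0 / A, 1.0 / A, 1.0 / B])
            ps, vs = p * scale, v * scale
            a = vs @ vs
            b = 2.0 * ps @ vs
            c = ps @ ps - 1.0
            disc = b * b - 4.0 * a * c
            if disc < 0:
                return None                        # ray misses the Earth
            t = (-b - np.sqrt(disc)) / (2.0 * a)   # nearer root = first surface hit
            return p + t * v

        # Hypothetical example: aircraft at ~8300 m altitude looking straight down
        ground = intersect_ellipsoid(np.array([A + 8300.0, 0.0, 0.0]),
                                     np.array([-1.0, 0.0, 0.0]))
        print(ground)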

  12. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    PubMed

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three modeling approaches to item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential for meeting increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed. PMID:26933110

  13. Automatic atlas-based volume estimation of human brain regions from MR images

    SciTech Connect

    Andreasen, N.C.; Rajarethinam, R.; Cizadlo, T.; Arndt, S.

    1996-01-01

    MRI offers many opportunities for noninvasive in vivo measurement of structure-function relationships in the human brain. Although automated methods are now available for whole-brain measurements, an efficient and valid automatic method for volume estimation of subregions such as the frontal or temporal lobes is still needed. We adapted the Talairach atlas to the study of brain subregions. We supplemented the atlas with additional boxes to include the cerebellum. We assigned all the boxes to 1 of 12 regions of interest (ROIs) (frontal, parietal, temporal, and occipital lobes, cerebellum, and subcortical regions on right and left sides of the brain). Using T1-weighted MR scans collected with an SPGR sequence (slice thickness = 1.5 mm), we manually traced these ROIs and produced volume estimates. We then transformed the scans into Talairach space and compared the volumes produced by the two methods ("traced" versus "automatic"). The traced measurements were considered to be the "gold standard" against which the automatic measurements were compared. The automatic method was found to produce measurements that were nearly identical to the traced method. We compared absolute measurements of volume produced by the two methods, as well as the sensitivity and specificity of the automatic method. We also compared the measurements of cerebral blood flow obtained through [¹⁵O]H₂O PET studies in a sample of nine subjects. Absolute measurements of volume produced by the two methods were very similar, and the sensitivity and specificity of the automatic method were found to be high for all regions. The flow values were also found to be very similar by both methods. The automatic atlas-based method for measuring the volume of brain subregions produces results that are similar to manual techniques. 39 refs., 4 figs., 3 tabs.

  14. The feasibility of atlas-based automatic segmentation of MRI for H&N radiotherapy planning.

    PubMed

    Wardman, Kieran; Prestwich, Robin J D; Gooding, Mark J; Speight, Richard J

    2016-01-01

    Atlas-based autosegmentation is an established tool for segmenting structures for CT-planned head and neck radiotherapy. MRI is being increasingly integrated into the planning process. The aim of this study is to assess the feasibility of MRI-based, atlas-based autosegmentation for organs at risk (OAR) and lymph node levels, and to compare the segmentation accuracy with CT-based autosegmentation. Fourteen patients with locally advanced head and neck cancer in a prospective imaging study underwent a T1-weighted MRI and a PET-CT (with dedicated contrast-enhanced CT) in an immobilization mask. Organs at risk (orbits, parotids, brainstem, and spinal cord) and the left level II lymph node region were manually delineated on the CT and MRI separately. A 'leave one out' approach was used to automatically segment structures onto the remaining images separately for CT and MRI. Contour comparison was performed using multiple positional metrics: Dice index, mean distance to conformity (MDC), sensitivity index (Se Idx), and inclusion index (Incl Idx). Automatic segmentation using MRI of orbits, parotids, brainstem, and lymph node level was acceptable, with a Dice coefficient of 0.73-0.91, MDC 2.0-5.1 mm, Se Idx 0.64-0.93, and Incl Idx 0.76-0.93. Segmentation of the spinal cord was poor (Dice coefficient 0.37). The process of automatic segmentation was significantly better on MRI compared to CT for orbits, parotid glands, brainstem, and left lymph node level II by multiple positional metrics; spinal cord segmentation based on MRI was inferior compared with CT. Accurate atlas-based automatic segmentation of OAR and lymph node levels is feasible using T1-MRI; segmentation of the spinal cord was found to be poor. Comparison with CT-based automatic segmentation suggests that the process is at least as accurate using MRI. These results support further translation of MRI-based segmentation methodology into clinical practice. PMID:27455480
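
    The overlap metrics used for contour comparison can be computed as below. The Dice index follows its standard definition; the sensitivity and inclusion index formulas shown are common conventions and are assumptions rather than the paper's exact formulae, and the mean distance to conformity is omitted.

        # Sketch: volumetric overlap metrics between an automatic and a reference mask.
        import numpy as np

        def overlap_metrics(auto_mask, reference_mask):
            a = auto_mask.astype(bool)
            r = reference_mask.astype(bool)
            inter = np.logical_and(a, r).sum()
            dice = 2.0 * inter / (a.sum() + r.sum())
            sensitivity_index = inter / r.sum()     # fraction of the reference that is covered
            inclusion_index = inter / a.sum()       # fraction of the auto contour inside the reference
            return dice, sensitivity_index, inclusion_index

        auto = np.zeros((50, 50), dtype=bool); auto[10:30, 10:30] = True
        ref = np.zeros((50, 50), dtype=bool);  ref[12:32, 12:32] = True
        print(overlap_metrics(auto, ref))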

  15. A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques

    NASA Astrophysics Data System (ADS)

    Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk

    While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown-word candidates is dominantly smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighing each of its candidates according to their ranks and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate the performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top-ten candidates are considered, an improvement of 8.45% and 6.79%, respectively, over the conventional record-based naïve Bayes classifier and the vanilla version. Another result, obtained by applying only the best features, shows 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, improvements of 3.97% and 9.78% over naïve Bayes and the vanilla version. Finally, an error analysis is given.

  16. Automatic calibration of space based manipulators and mechanisms

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1988-01-01

    Four tasks in manipulator kinematic calibration are summarized. Calibration of a seven degree of freedom manipulator was simulated. A calibration model is presented that can be applied on a closed-loop robot. It is an expansion of open-loop kinematic calibration algorithms subject to constraints. A closed-loop robot with a five-bar linkage transmission was tested. Results show that the algorithm converges within a few iterations. The concept of model differences is formalized. Differences are categorized as structural and numerical, with emphasis on the structural. The work demonstrates that geometric manipulators can be visualized as points in a vector space with the dimension of the space depending solely on the number and type of manipulator joint. Visualizing parameters in a kinematic model as the coordinates locating the manipulator in vector space enables a standard evaluation of the models. Key results include a derivation of the maximum number of parameters necessary for models, a formal discussion on the inclusion of extra parameters, and a method to predetermine a minimum model structure for a kinematic manipulator. A technique is presented that enables single point sensors to gather sufficient information to complete a calibration.

  17. Automatic generation of fuzzy rules for the sensor-based navigation of a mobile robot

    SciTech Connect

    Pin, F.G.; Watanabe, Y.

    1994-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  18. Sensor-based navigation of a mobile robot using automatically constructed fuzzy rules

    SciTech Connect

    Watanabe, Y.; Pin, F.G.

    1993-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  19. Automatic hearing loss detection system based on auditory brainstem response

    NASA Astrophysics Data System (ADS)

    Aldonate, J.; Mercuri, C.; Reta, J.; Biurrun, J.; Bonell, C.; Gentiletti, G.; Escobar, S.; Acevedo, R.

    2007-11-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  20. Automatic visible and infrared face registration based on silhouette matching and robust transformation estimation

    NASA Astrophysics Data System (ADS)

    Tian, Tian; Mei, Xiaoguang; Yu, Yang; Zhang, Can; Zhang, Xiaoye

    2015-03-01

    Registration of multi-sensor data (particularly visible color sensors and infrared sensors) is a prerequisite for multimodal image analysis such as image fusion. In this paper, we propose an automatic registration technique for visible and infrared face images based on silhouette matching and robust transformation estimation. The key idea is to represent a (visible or infrared) face image by its silhouette, which is extracted from the image's edge map and consists of a set of discrete points, and then to align the two silhouette point sets by using their feature similarity and spatial geometrical information. More precisely, our algorithm first matches the silhouette point sets by their local shape features such as shape context, which creates a set of putative correspondences that may be contaminated by outliers. Next, we estimate the accurate transformation from the putative correspondence set under a robust maximum likelihood framework combined with the EM algorithm, where the transformation between the image pair is modeled by a parametric model such as a rigid or affine transformation. The qualitative and quantitative comparisons on a publicly available database demonstrate that our method significantly outperforms other state-of-the-art visible/infrared face registration methods. As a result, our method will be beneficial for fusion-based face recognition.
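
    Only the transformation-estimation step is sketched below, as a plain least-squares affine fit from putative correspondences; the robust maximum-likelihood/EM treatment of outliers described above is omitted, and the example correspondences are synthetic.

        # Sketch: least-squares affine estimation from putative point correspondences.
        import numpy as np

        def estimate_affine(src, dst):
            """src, dst: (n, 2) corresponding points. Returns a 2x3 affine matrix M = [A | t]."""
            n = src.shape[0]
            X = np.hstack([src, np.ones((n, 1))])            # homogeneous source points
            M, *_ = np.linalg.lstsq(X, dst, rcond=None)      # solves X @ M = dst
            return M.T                                       # 2x3

        def apply_affine(M, pts):
            return pts @ M[:, :2].T + M[:, 2]

        # Hypothetical check: recover a known rotation + translation
        rng = np.random.default_rng(0)
        src = rng.random((100, 2)) * 100
        theta = np.deg2rad(5)
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        dst = src @ R.T + [3.0, -2.0]
        M = estimate_affine(src, dst)
        print(np.abs(apply_affine(M, src) - dst).max())      # should be ~0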

  1. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  2. Template-based automatic breast segmentation on MRI by excluding the chest region

    PubMed Central

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Chan, Siwa; Chen, Siping; Su, Min-Ying

    2013-01-01

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  3. FishCam - A semi-automatic video-based monitoring system of fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2016-04-01

    One of the main objectives of the Water Framework Directive is to preserve and restore the continuum of river networks. Regarding vertebrate migration, fish passes are a widely used measure to overcome anthropogenic constructions. The functionality of this measure needs to be verified by monitoring. In this study we propose a newly developed monitoring system, named FishCam, to observe fish migration, especially in fish passes, without contact and without imposing stress on fish. To avoid time- and cost-consuming field work for fish pass monitoring, this project aims to develop a semi-automatic monitoring system that enables a continuous observation of fish migration. The system consists of a detection tunnel and a high-resolution camera, and is mainly based on the technology of security cameras. If changes in the image, e.g. by migrating fish or drifting particles, are detected by a motion sensor, the camera system starts recording and continues until no further motion is detectable. An ongoing key challenge in this project is the development of robust software, which counts, measures and classifies the passing fish. To achieve this goal, many different computer vision tasks and classification steps have to be combined. Moving objects have to be detected and separated from the static part of the image, objects have to be tracked throughout the entire video, and fish have to be separated from non-fish objects (e.g. foliage and woody debris, shadows and light reflections). Subsequently, the length of all detected fish needs to be determined and fish should be classified into species. The classification of objects into fish and non-fish is realized through ensembles of state-of-the-art classifiers on a single image per object. The choice of the best image for classification is implemented through a newly developed "fish benchmark" value. This value compares the actual shape of the object with a schematic model of side-specific fish. To enable an automatization of the

  4. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food / coffee, banks/ATM etc. and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible via a toll-free DID (Direct Inward Dialing) number from any phone, by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth’s surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI), and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which provide data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user coordinates and therefore efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily price of gas or motels, automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update listings in the database automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
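
    The straight-line (chord) distance mentioned above can be computed by converting latitude/longitude to Earth-centred Cartesian coordinates and taking the Euclidean distance, as in this sketch. A spherical Earth is assumed, and the coordinates in the example are hypothetical.

        # Sketch: Euclidean (chord) distance between two lat/lon points on a spherical Earth.
        import math

        EARTH_RADIUS_KM = 6371.0

        def to_cartesian(lat_deg, lon_deg):
            lat, lon = math.radians(lat_deg), math.radians(lon_deg)
            return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
                    EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
                    EARTH_RADIUS_KM * math.sin(lat))

        def chord_distance_km(p, q):
            """Straight-line distance through the Earth between two (lat, lon) points."""
            return math.dist(to_cartesian(*p), to_cartesian(*q))

        # Example: user location vs. two candidate gas stations (hypothetical coordinates)
        user = (36.37, 127.36)
        stations = {"station_a": (36.40, 127.39), "station_b": (36.30, 127.50)}
        nearest = min(stations, key=lambda s: chord_distance_km(user, stations[s]))
        print(nearest)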

  5. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  6. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching as well as the resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. This methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  7. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  8. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  9. Automatic Summarization of MEDLINE Citations for Evidence–Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
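
    Mean average precision, one of the two performance metrics above, can be computed as in this short sketch; the ranked drug lists and reference sets in the example are hypothetical.

        # Sketch: mean average precision (MAP) over ranked intervention lists, one per topic.
        def average_precision(ranked_items, relevant):
            hits, precisions = 0, []
            for rank, item in enumerate(ranked_items, start=1):
                if item in relevant:
                    hits += 1
                    precisions.append(hits / rank)
            return sum(precisions) / len(relevant) if relevant else 0.0

        def mean_average_precision(runs):
            """runs: list of (ranked_items, relevant_set) pairs, one per disease/topic."""
            return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

        # Hypothetical example with two topics
        runs = [(["drugA", "drugB", "drugC"], {"drugA", "drugC"}),
                (["drugX", "drugY"], {"drugY"})]
        print(mean_average_precision(runs))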

  10. Wavelet-based semi-automatic live-wire segmentation

    NASA Astrophysics Data System (ADS)

    Haenselmann, Thomas; Effelsberg, Wolfgang

    2003-06-01

    The live-wire approach is a well-known algorithm based on a graph search to locate boundaries for image segmentation. We will extend the original cost function, which is solely based on finding strong edges, so that the approach can take a large variety of boundaries into account. The cost function adapts to the local characteristics of a boundary by analyzing a user-defined sample using a continuous wavelet decomposition. We will finally extend the approach into 3D in order to segment objects in volumetric data, e.g., from medical CT and MR scans.
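
    A minimal live-wire sketch: a shortest path between two user-selected pixels over a cost map that is cheap along strong edges, found with Dijkstra's algorithm. The inverse-gradient cost used here stands in for the wavelet-adapted cost function proposed in the paper.

        # Sketch: live-wire boundary as a Dijkstra shortest path over an edge-based cost map.
        import heapq
        import numpy as np

        def livewire_path(image, start, goal):
            gy, gx = np.gradient(image.astype(float))
            grad = np.hypot(gx, gy)
            cost = 1.0 / (1.0 + grad)                 # cheap to travel along strong edges
            h, w = image.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = 0.0
            heap = [(0.0, start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                            nd = d + cost[nr, nc]
                            if nd < dist[nr, nc]:
                                dist[nr, nc] = nd
                                prev[(nr, nc)] = (r, c)
                                heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [], goal
            while node != start:                       # walk back from goal to start
                path.append(node)
                node = prev[node]
            return [start] + path[::-1]

        img = np.zeros((50, 50)); img[:, 25:] = 1.0    # a synthetic vertical edge
        print(len(livewire_path(img, (5, 25), (45, 25))))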

  11. Automatic Topology Derivation from Ifc Building Model for In-Door Intelligent Navigation

    NASA Astrophysics Data System (ADS)

    Tang, S. J.; Zhu, Q.; Wang, W. W.; Zhang, Y. T.

    2015-05-01

    To achieve accurate navigation within the building environment, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, the so-called in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or purely geometric models, which suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. These shortcomings can be mitigated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method that combines a straight medial axis transformation algorithm (S-MAT) with the IFC building model to reconstruct the indoor geometric topology network. The derived topology is aimed at facilitating decision making for different in-building navigation tasks. In this work, we describe a multi-step derivation process, including semantic cleaning, walkable feature extraction, Multi-Storey 2D Mapping, and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  12. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting in low quality DEMs of various types. PMID:18000328

  13. Automatic generation of skeletal mechanisms for ignition combustion based on level of importance analysis

    SciTech Connect

    Loevaas, Terese

    2009-07-15

    A level of importance (LOI) selection parameter is employed in order to identify species with general low importance to the overall accuracy of a chemical model. This enables elimination of the minor reaction paths in which these species are involved. The generation of such skeletal mechanisms is performed automatically in a pre-processing step ranking species according to their level of importance. This selection criterion is a combined parameter based on a time scale and sensitivity analysis, identifying both short lived species and species with respect to which the observable of interest has low sensitivity. In this work a careful element flux analysis demonstrates that such species do not interact in major reaction paths. Employing the LOI procedure replaces the previous method of identifying redundant species through a two step procedure involving a reaction flow analysis followed by a sensitivity analysis. The flux analysis is performed using DARS©, a digital analysis tool modelling reactive systems. Simplified chemical models are generated based on a detailed ethylene mechanism involving 111 species and 784 reactions (1566 forward and backward reactions) proposed by Wang et al. Eliminating species from detailed mechanisms introduces errors in the predicted combustion parameters. In the present work these errors are systematically studied for a wide range of conditions, including temperature, pressure and mixtures. Results show that the accuracy of simplified models is particularly lowered when the initial temperatures are close to the transition between low- and high-temperature chemistry. A speed-up factor of 5 is observed when using a simplified model containing only 27% of the original species and 19% of the original reactions. (author)

  14. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  15. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  16. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because a global optimization of a point set registration occurs, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method can achieve accurate tracking, almost identical to the current best result from IMOD's semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (only requires a rough value of the marker diameter) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction. PMID:26433028
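    The tracking step above reduces marker correspondence to an incomplete point-set registration. As a minimal illustration of the rigid core of such a registration (Kabsch least-squares alignment of two already-corresponded marker sets; the paper's method additionally handles unknown and missing correspondences), under assumed 2-D marker coordinates:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid alignment (Kabsch): find R, t minimizing ||R @ p + t - q||.
    P, Q : (n, 2) corresponding marker positions in two tilt images."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# toy usage: markers rotated by 10 degrees and shifted
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = np.random.rand(20, 2) * 100.0
Q = P @ R_true.T + np.array([5.0, -3.0])
R_est, t_est = rigid_align(P, Q)
```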

  17. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.

  18. Automatic Adjustment of Wide-Base Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  19. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    also during the strong motion phase. This approach helps to overcome the limitation derived from the use of techniques based on simple Fourier Transform that provide good results when the response of the monitored system is stationary, but fail when the system exhibits a non-stationary behaviour. The main advantage derived from the use of the proposed approach for Structural Health Monitoring is based on the simplicity of the interpretation of the nonlinear variations of the fundamental frequency. The proposed methodology has been tested on numerical models of reinforced concrete structures designed for gravity loads only, both with and without infill panels. In order to verify the effectiveness of the proposed approach for the automatic evaluation of the fundamental frequency over time, the results of an experimental campaign of shaking table tests conducted at the seismic laboratory of University of Basilicata (SISLAB) have been used. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''. References Ditommaso, R., Mucciarelli, M., Ponzo, F.C. (2012) Analysis of non-stationary structural systems by using a band-variable filter. Bulletin of Earthquake Engineering. DOI: 10.1007/s10518-012-9338-y.

  20. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
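    To make the notion of a Markov reliability model concrete, the toy model below (hand-written, not ARM output; the three-state structure and parameter values are assumptions) describes a two-unit standby pair with imperfect switchover coverage and repair, and evaluates its reliability as the probability of not having reached the absorbing failure state:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical rates (per hour): failure rate, switchover coverage, repair rate.
lam, c, mu = 1e-3, 0.95, 0.1

# States: 0 = both units good, 1 = one unit good (standby switched in), 2 = system failed.
Q = np.array([
    [-lam,         c * lam,  (1 - c) * lam],   # uncovered failure takes the system down
    [  mu,     -(lam + mu),            lam],   # repair back to full, or second failure
    [ 0.0,             0.0,            0.0],   # absorbing failure state
])

def reliability(t, p0=np.array([1.0, 0.0, 0.0])):
    """Probability the system has not yet reached the failed state by time t."""
    p_t = p0 @ expm(Q * t)      # solve dp/dt = p Q for the continuous-time Markov chain
    return p_t[0] + p_t[1]

print(reliability(100.0))       # e.g. a 100-hour mission
```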

  1. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  2. Automatic lane keeping of a vehicle based on perception net

    NASA Astrophysics Data System (ADS)

    Boo, Kwangsuck; Jung, Moonyoung

    2000-10-01

    The objective of this research is to monitor and control the vehicle motion in order to remove the existing safety risk through human-machine cooperative vehicle control. A predictive control method is proposed to control the steering wheel of the vehicle to keep the lane. The desired steering wheel angle for controlling the vehicle motion is calculated at every sample step from the vehicle dynamics and the current and estimated pose of the vehicle. The vehicle pose and the road curvature were calculated by geometrically fusing sensor data from camera image, tachometer and steering wheel encoder through the Perception Net, where not only the state variables, but also the corresponding uncertainties were propagated in forward and backward direction in such a way to satisfy the given constraint condition, maintain consistency, reduce the uncertainties, and guarantee robustness. A series of experiments was conducted to evaluate the control performance, in which a car-like robot was used to avoid unwanted safety problems. The results show that the robot kept to a lane of arbitrary shape very well at moderate speed.
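    A predictive steering law of the general kind described can be sketched with a pure-pursuit controller on a kinematic bicycle model; this is a generic stand-in rather than the authors' Perception-Net formulation, and the wheelbase and look-ahead point are assumed values:

```python
import numpy as np

def pure_pursuit_steer(pose, target, wheelbase=2.5):
    """Steering angle that drives a kinematic bicycle from `pose` toward a
    look-ahead point `target` on the estimated lane centerline.
    pose   : (x, y, heading) of the vehicle
    target : (x, y) look-ahead point, e.g. predicted from the lane curvature
    """
    x, y, yaw = pose
    dx, dy = target[0] - x, target[1] - y
    # look-ahead point expressed in the vehicle frame
    lx = np.cos(yaw) * dx + np.sin(yaw) * dy
    ly = -np.sin(yaw) * dx + np.cos(yaw) * dy
    ld2 = lx * lx + ly * ly                    # squared look-ahead distance
    curvature = 2.0 * ly / ld2                 # circle through pose and target
    return np.arctan(wheelbase * curvature)    # bicycle-model steering angle

# toy usage: vehicle at origin heading along +x, lane point 10 m ahead, 0.5 m to the left
delta = pure_pursuit_steer((0.0, 0.0, 0.0), (10.0, 0.5))
```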

  3. Optimal feature point selection and automatic initialization in active shape model search.

    PubMed

    Lekadir, Karim; Yang, Guang-Zhong

    2008-01-01

    This paper presents a novel approach for robust and fully automatic segmentation with active shape model search. The proposed method incorporates global geometric constraints during feature point search by using interlandmark conditional probabilities. The A* graph search algorithm is adapted to identify in the image the optimal set of valid feature points. The technique is extended to enable reliable and fast automatic initialization of the ASM search. Validation with 2-D and 3-D MR segmentation of the left ventricular epicardial border demonstrates significant improvement in robustness and overall accuracy, while eliminating the need for manual initialization. PMID:18979776

  4. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses Multilayer Perceptron Neural Networks to analyze the supernovae spectra. Experiments were performed using different pre-processing strategies and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained indicating the viability of using this method in places that have no specialist or that require an automatic analysis.
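    For orientation, a multilayer-perceptron spectrum classifier of the general sort described could be assembled as follows (scikit-learn stand-in; the feature dimension, layer sizes, labels and data are illustrative placeholders, not the paper's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: each row is a preprocessed (rebinned, normalized)
# supernova spectrum; the labels are the classic types.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))                  # 200 spectra, 256 flux bins
y = rng.choice(["Ia", "Ib/c", "II"], size=200)   # placeholder labels

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print(clf.predict(X[:3]))
```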

  5. Development of automatic target recognition for infrared sensor-based close-range land mine detector

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Garcia, Sigberto A.; Cloud, Eugene L.; Duvoisin, Herbert A., III; Long, Daniel T.; Hackett, Jay K.

    1995-06-01

    Infrared imagery scenes change continuously with environmental conditions. Strategic targets embedded in them are often difficult to identify with the naked eye. An IR sensor-based mine detector must include Automatic Target Recognition (ATR) to detect and extract land mines from IR scenes. In the course of the ATR development process, mine signature data were collected using a commercial 8-12 μm spectral range FLIR, model Inframetrics 445L, and a commercial 3-5 μm staring focal plane array FLIR, model Infracam. These sensors were customized to the required field-of-view for short range operation. These baseline data were then input into a specialized parallel processor on which the mine detection algorithm is developed and trained. The ATR is feature-based and consists of several subprocesses to progress from raw input IR imagery to a neural network classifier for final nomination of the targets. Initially, image enhancement is used to remove noise and sensor artifact. Three preprocessing techniques, namely model-based segmentation, multi-element prescreener, and geon detector are then applied to extract specific features of the targets and to reject all objects that do not resemble mines. Finally, to further reduce the false alarm rate, the extracted features are presented to the neural network classifier. Depending on the operational circumstances, one of three neural network techniques will be adopted: back propagation, supervised real-time learning, or unsupervised real-time learning. The Close Range IR Mine Detection System is an Army program currently being experimentally developed to be demonstrated in the Army's Advanced Technology Demonstration in FY95. The ATR resulting from this program will be integrated in the 21st Century Land Warrior program, for which mine avoidance capability is a primary interest.

  6. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  7. A Zipfian Model of an Automatic Bibliographic System: An Application to MEDLINE.

    ERIC Educational Resources Information Center

    Fedorowicz, Jane

    1982-01-01

    Derives the underlying structure of the Zipf distribution, with emphasis on its application to word frequencies in the inverted files of automatic bibliographic systems, and applies the Zipfian model to the National Library of Medicine's MEDLINE database. An appendix on the Zipfian mean and 12 references are included. (Author/JL)
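    For reference, the Zipfian model in question relates a term's relative frequency to its frequency rank in the inverted file; in its generalized textbook form (not quoted from the article),

```latex
p(r) \;=\; \frac{r^{-s}}{\sum_{k=1}^{N} k^{-s}}, \qquad s \approx 1,
```

    where r is the rank of a term, N is the vocabulary size, and s is the Zipf exponent.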

  8. Automatic location of facial feature points and synthesis of facial sketches using direct combined model.

    PubMed

    Tu, Ching-Ting; Lien, Jenn-Jier James

    2010-08-01

    Automatically locating multiple feature points (i.e., the shape) in a facial image and then synthesizing the corresponding facial sketch are highly challenging since facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When the facial sketches are to be synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. To resolve these problems, this paper develops an automatic facial sketch synthesis system based on a novel direct combined model (DCM) algorithm. The proposed system executes three cascaded procedures, namely, 1) synthesis of the facial shape from the input texture information (i.e., the facial image); 2) synthesis of the exaggerated facial shape from the synthesized facial shape; and 3) synthesis of a sketch from the original input image and the synthesized exaggerated shape. Previous proposals for reconstructing facial shapes and synthesizing the corresponding facial sketches are heavily reliant on the quality of the texture reconstruction results, which, in turn, are highly sensitive to occlusion and lighting effects in the input image. However, the DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions. Moreover, the DCM approach is capable of synthesizing facial sketches from input images with a wide variety of facial poses, gaze directions, and facial expressions even when such images are not included within the original training data set. PMID:19933007

  9. A new approach for automatic sleep scoring: Combining Taguchi based complex-valued neural network and complex wavelet transform.

    PubMed

    Peker, Musa

    2016-06-01

    Automatic classification of sleep stages is one of the most important methods used for diagnostic procedures in psychiatry and neurology. This method, which has been developed by sleep specialists, is a time-consuming and difficult process. Generally, electroencephalogram (EEG) signals are used in sleep scoring. In this study, a new complex classifier-based approach is presented for automatic sleep scoring using EEG signals. In this context, complex-valued methods were utilized in the feature selection and classification stages. In the feature selection stage, features of EEG data were extracted with the help of a dual tree complex wavelet transform (DTCWT). In the next phase, five statistical features were obtained. These features are classified using complex-valued neural network (CVANN) algorithm. The Taguchi method was used in order to determine the effective parameter values in this CVANN. The aim was to develop a stable model involving parameter optimization. Different statistical parameters were utilized in the evaluation phase. Also, results were obtained in terms of two different sleep standards. In the study in which a 2nd level DTCWT and CVANN hybrid model was used, 93.84% accuracy rate was obtained according to the Rechtschaffen & Kales (R&K) standard, while a 95.42% accuracy rate was obtained according to the American Academy of Sleep Medicine (AASM) standard. Complex-valued classifiers were found to be promising in terms of the automatic sleep scoring and EEG data. PMID:26787511

  10. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published results obtained manually by an expert.
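    The optimization component can be illustrated with a bare-bones particle swarm minimizer; this is generic PSO, not the authors' PSO-Snake hybrid, and the inertia and acceleration constants are conventional default assumptions:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box. bounds: (lower, upper) arrays of equal length."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                      # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])  # personal bests
    g = pbest[pbest_val.argmin()].copy()                      # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy usage: minimize distance to a known point (a stand-in for a snake-energy term)
best, val = pso_minimize(lambda p: np.sum((p - np.array([3.0, -1.0])) ** 2),
                         bounds=(np.array([-10.0, -10.0]), np.array([10.0, 10.0])))
```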

  11. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters across a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria resulting in 95% detection accuracy. The methodology, developed as a knowledge-based ruleset in a training area in parts of Mare Imbrium, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  12. Towards an Automatic and Application-Based Eigensolver Selection

    SciTech Connect

    Zhang, Yeliang; Li, Xiaoye S.; Marques, Osni

    2005-09-09

    The computation of eigenvalues and eigenvectors is an important and often time-consuming phase in computer simulations. Recent efforts in the development of eigensolver libraries have given users good algorithms without the need for users to spend much time in programming. Yet, given the variety of numerical algorithms that are available to domain scientists, choosing the ''best'' algorithm suited for a particular application is a daunting task. As simulations become increasingly sophisticated and larger, it becomes infeasible for a user to try out every reasonable algorithm configuration in a timely fashion. Therefore, there is a need for an intelligent engine that can guide the user through the maze of various solvers with various configurations. In this paper, we present a methodology and a software architecture aiming at determining the best solver based on the application type and the matrix properties. We combine a decision tree and an intelligent engine to select a solver and a preconditioner combination for the application submitted by the user. We also discuss how our system interface is implemented with third party numerical libraries. In the case study, we demonstrate the feasibility and usefulness of our system with a simplified linear solving system. Our experiments show that our proposed intelligent engine is quite adept in choosing a suitable algorithm for different applications.

  13. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote the modeling, we had developed the CADLIVE dynamic simulator that automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility by CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to the research of systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction. PMID:24623466

  14. Interpretable Probabilistic Latent Variable Models for Automatic Annotation of Clinical Text

    PubMed Central

    Kotov, Alexander; Hasan, Mehedi; Carcone, April; Dong, Ming; Naar-King, Sylvie; BroganHartlieb, Kathryn

    2015-01-01

    We propose Latent Class Allocation (LCA) and Discriminative Labeled Latent Dirichlet Allocation (DL-LDA), two novel interpretable probabilistic latent variable models for automatic annotation of clinical text. Both models separate the terms that are highly characteristic of textual fragments annotated with a given set of labels from other non-discriminative terms, but rely on generative processes with different structure of latent variables. LCA directly learns class-specific multinomials, while DL-LDA breaks them down into topics (clusters of semantically related words). Extensive experimental evaluation indicates that the proposed models outperform Naïve Bayes, a standard probabilistic classifier, and Labeled LDA, a state-of-the-art topic model for labeled corpora, on the task of automatic annotation of transcripts of motivational interviews, while the output of the proposed models can be easily interpreted by clinical practitioners. PMID:26958214

  15. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden (2001)…

  16. Preservation of memory-based automaticity in reading for older adults.

    PubMed

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research. PMID:26302027

  17. Performing Label-Fusion-Based Segmentation Using Multiple Automatically Generated Templates

    PubMed Central

    Chakravarty, M. Mallar; Steadman, Patrick; van Eede, Matthijs C.; Calcott, Rebecca D.; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D. Louis; Lerch, Jason P.

    2016-01-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). PMID:22611030

  18. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  19. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). PMID:22611030

  20. Effective Key Parameter Determination for an Automatic Approach to Land Cover Classification Based on Multispectral Remote Sensing Imagery

    PubMed Central

    Wang, Yong; Jiang, Dong; Zhuang, Dafang; Huang, Yaohuan; Wang, Wei; Yu, Xinfang

    2013-01-01

    The classification of land cover based on satellite data is important for many areas of scientific research. Unfortunately, some traditional land cover classification methods (e.g. supervised classification) are very labor-intensive and subjective because of the required human involvement. Jiang et al. proposed a simple but robust method for land cover classification using a prior classification map and a current multispectral remote sensing image. This new method has proven to be a suitable classification method; however, its drawback is that it is a semi-automatic method because the key parameters cannot be selected automatically. In this study, we propose an approach in which the two key parameters are chosen automatically. The proposed method consists primarily of the following three interdependent parts: the selection procedure for the pure-pixel training-sample dataset, the method to determine the key parameters, and the optimal combination model. In this study, the proposed approach employs overall accuracy, the Kappa Coefficient (KC), and time consumption (TC, in seconds) to select the two key parameters automatically instead of using a test-decision, which avoids subjective bias. A case study of Weichang District of Hebei Province, China, using Landsat-5/TM data of 2010 with 30 m spatial resolution and a prior classification map of 2005, recognised as relatively precise data, was conducted to test the performance of this method. The experimental results show that the methodology determining the key parameters uses the portfolio optimisation model and increases the degree of automation of Jiang et al.'s classification method, which may have a wide scope of scientific application. PMID:24204582

  1. Calibration of the Hydrological Simulation Program Fortran (HSPF) model using automatic calibration and geographical information systems

    NASA Astrophysics Data System (ADS)

    Al-Abed, N. A.; Whiteley, H. R.

    2002-11-01

    Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran model, is a major challenge. This paper describes calibration procedures for water-quantity parameters of the HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km². Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m³ s⁻¹ and the average measured discharge at Brantford was 70 m³ s⁻¹. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.

  2. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study, Long Period (LP) and Volcano-Tectonic (VT). Experiments show that the standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can lead to reductions in false positive rate in event detection as high as 31% with a true positive accuracy equal to 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events based on the signal-to-noise ratio criteria recommended by a volcano expert.
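    As a rough sketch of such a detection pipeline (not the authors' duration-model HMM), one can train a Gaussian HMM on per-frame feature vectors and then enforce a minimum event duration on the decoded state sequence as a post-processing step; the feature dimension, state-to-event mapping and hmmlearn usage below are assumptions:

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical features: one 4-dimensional vector per frame (e.g. band energies).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 4))

# Assumed state mapping: 0 = background noise, 1 = LP event, 2 = VT event.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X_train)

def decode_with_min_duration(model, X, min_len=20):
    """Viterbi decoding followed by removal of event segments shorter than
    `min_len` frames -- a crude stand-in for explicit duration modeling."""
    states = model.predict(X)
    out = states.copy()
    start = 0
    for i in range(1, len(states) + 1):
        if i == len(states) or states[i] != states[start]:
            if states[start] != 0 and (i - start) < min_len:
                out[start:i] = 0            # too short to be a real event
            start = i
    return out

labels = decode_with_min_duration(model, rng.normal(size=(1000, 4)))
```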

  3. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4% and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %. PMID:27086033

  4. Applications of hydrologic information automatically extracted from digital elevation models

    USGS Publications Warehouse

    Jenson, S.K.

    1991-01-01

    Digital elevation models (DEMs) can be used to derive a wealth of information about the morphology of a land surface. Traditional raster analysis methods can be used to derive slope, aspect, and shaded relief information; recently-developed computer programs can be used to delineate depressions, overland flow paths, and watershed boundaries. These methods were used to delineate watershed boundaries for a geochemical stream sediment survey, to compare the results of extracting slope and flow paths from DEMs of varying resolutions, and to examine the geomorphology of a Martian DEM. -Author
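    As an example of the kind of raster derivation mentioned, the classic D8 rule assigns each cell a flow direction toward its steepest-descent neighbour; a simplified numpy sketch (edge cells and flats are left undefined, and depression filling is omitted):

```python
import numpy as np

def d8_flow_direction(dem, cellsize=1.0):
    """Return, for each interior cell, the index (0-7) of the neighbour with the
    steepest downward slope, or -1 where no neighbour is lower (pit or flat)."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    dist = np.array([np.hypot(dy, dx) for dy, dx in offs]) * cellsize
    rows, cols = dem.shape
    fdir = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = np.array([dem[r, c] - dem[r + dy, c + dx] for dy, dx in offs]) / dist
            k = int(drops.argmax())
            if drops[k] > 0:            # assign a direction only if some neighbour is lower
                fdir[r, c] = k
    return fdir

# toy DEM sloping toward the south-east corner
dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
print(d8_flow_direction(dem))
```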

  5. Automatic generation of computable implementation guides from clinical information models.

    PubMed

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958

  6. Automatic Endocardium Contour Tracing Method Using Standard Left Ventricles Shape Model

    NASA Astrophysics Data System (ADS)

    Horie, Masahiro; Kashima, Masayuki; Sato, Kiminori; Watanabe, Mutsumi

    The need for ultrasonic diagnosis tools increases every year. We propose an automatic endocardium tracing method by applying a prepared “Standard Left Ventricles Shape Model (SLVSM)”. The cross section of the heart wall in an ultrasonic image is determined by the position and angle of the probe. The initial contour is adaptively determined as the crossing curve between the SLVSM and the cross section. The endocardium contour is then extracted by an active contour model (ACM) in two stages. In the first stage, an endocardium contour is detected using the result of an edge extraction based on the separability of image features. In the second stage, the endocardium contour is extracted using shape correction processing. “Mitral valve processing” not only detects the position of the mitral valve at the end diastolic period, but also corrects the detected contour after the first stage of ACM. Experimental results using one healthy case and three diseased cases have shown the effectiveness of the proposed method.

  7. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
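    Once the gel boundary has been segmented, the two reported measurements follow directly from the binary mask; a minimal sketch with scikit-image (the segmentation itself and the pixel scale are assumed to be given):

```python
import numpy as np
from skimage import measure

def gel_area_and_diameter(mask, mm_per_pixel=0.05):
    """Area and equivalent-circle diameter of the largest segmented region.
    mask: boolean image, True inside the gel; mm_per_pixel is an assumed scale."""
    labeled = measure.label(mask)
    props = max(measure.regionprops(labeled), key=lambda p: p.area)
    diameter_px = 2.0 * np.sqrt(props.area / np.pi)     # equivalent-circle diameter
    return props.area * mm_per_pixel ** 2, diameter_px * mm_per_pixel

# toy usage: a filled circle of radius 40 pixels
yy, xx = np.mgrid[:200, :200]
mask = (yy - 100) ** 2 + (xx - 100) ** 2 <= 40 ** 2
print(gel_area_and_diameter(mask))
```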

  8. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax and usually comprises Computed Tomography (CT) examination for pre-operative diagnosis. Aiming at the elimination of the high amounts of CT radiation exposure, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prosthesis. The complete elimination of CT involves the determination of ribs external outline, at the maximum sternum depression point for prosthesis placement, based on chest wall skin surface information, acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Back propagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using the soft tissue thicknesses, determined using image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the ribs outline in data from 20 patient scanners. Tests revealed that ribs position can be estimated with an average error of about 6.82+/-5.7 mm for the left and right side of the patient. Such an error range is well below current prosthesis manual modeling (11.7+/-4.01 mm) even without CT imagiology, indicating a considerable step forward towards CT replacement by a 3D scanner for prosthesis personalization.

  9. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied in different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquake, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
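    A stripped-down version of the classification step might look like the following, with scikit-learn standing in for the authors' SVM setup and with synthetic placeholder features and labels (the real features would be computed from the station waveforms):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder features, e.g. spectral ratios, amplitude ratios, time of day, etc.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.choice(["earthquake", "quarry_blast"], size=300)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(svm, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```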

  10. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light.

    PubMed

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-28

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm⁻² and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications. PMID:27076202

  11. Automatic restoration of motion blurred image based on frequency and cepstrum domain

    NASA Astrophysics Data System (ADS)

    Xu, Li; Gao, Xiaoyu; Fang, Tian

    2015-10-01

    Motion blur is one of the common causes of blurred images, and estimation of the point spread function (PSF) parameters is the key prerequisite for motion-blurred image restoration. Based on the spectrum and cepstrum characteristics of motion-blurred images, an automatic detection algorithm operating in the frequency and cepstrum domains is proposed in the paper, which detects the blur length and blur angle so that the motion-blurred image can then be restored. Experiments show that, in the noise-free case with blur lengths of 15 to 80 pixels, and apart from a few individual blur length/angle combinations (e.g. 30 pixels/300, 75 pixels/300), the blur length estimation error is 0 to 0.2 pixels and the blur angle estimation error is almost 0. The detection range is greater than that of some other methods, and the quality of image restoration is good.
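    The cepstrum-domain estimate exploits the fact that linear motion blur of length L introduces periodic zeros in the spectrum, which appear as a pronounced negative peak at quefrency L in the cepstrum. The following is a simplified 1-D sketch of that idea (horizontal blur only; the paper's method works in 2-D and also estimates the blur angle):

```python
import numpy as np

def estimate_blur_length(row, max_len=100):
    """Estimate horizontal motion-blur length from one image row via the cepstrum.
    The blur kernel's spectral zeros create a negative cepstral peak near the blur length."""
    spectrum = np.abs(np.fft.fft(row))
    cepstrum = np.real(np.fft.ifft(np.log(spectrum + 1e-12)))
    search = cepstrum[2:max_len]                 # skip the trivial low-quefrency part
    return int(np.argmin(search)) + 2            # most negative peak -> blur length

# toy usage: blur a random row with a 15-pixel box kernel
rng = np.random.default_rng(0)
row = rng.random(512)
blurred = np.convolve(row, np.ones(15) / 15.0, mode="same")
print(estimate_blur_length(blurred))             # should recover a value near 15
```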

  12. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology

    PubMed Central

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  13. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  14. An automatic stain removal algorithm of series aerial photograph based on flat-field correction

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Yan, Dongmei; Yang, Yang

    2010-10-01

    The dust on the camera's lens will leave dark stains on the image. Calibrating and compensating the intensity of the stained pixels plays an important role in airborne image processing. This article introduces an automatic compensation algorithm for the dark stains. It is based on the theory of flat-field correction. We produced a whiteboard reference image by aggregating hundreds of images recorded in one flight and used their average pixel values to simulate uniform white light irradiation. Then we constructed a look-up table function based on this whiteboard image to calibrate the stained image. The experimental results show that the proposed procedure can remove lens stains effectively and automatically.
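    The correction itself follows the standard flat-field relation: each pixel is rescaled by the ratio of the whiteboard image's mean intensity to its local value, so stained (darkened) pixels are boosted back toward the level of their neighbours. A minimal sketch of that step, assuming the whiteboard (flat) image has already been built by averaging the flight's frames as described:

```python
import numpy as np

def flat_field_correct(image, flat, eps=1e-6):
    """Compensate lens stains: divide by the normalized flat-field (whiteboard) image.
    image, flat : float arrays of the same shape; flat is the per-pixel average
    of many frames from the same flight."""
    gain = flat.mean() / np.maximum(flat, eps)      # > 1 where the flat is dark (stained)
    return np.clip(image * gain, 0.0, None)

# toy usage: a synthetic stain darkens a 20x20 patch by 30%
rng = np.random.default_rng(0)
flat = np.ones((100, 100)); flat[40:60, 40:60] = 0.7
image = rng.random((100, 100)) * flat
corrected = flat_field_correct(image, flat)
```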

  15. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face-tracking algorithms cannot automatically specify the initial location and scale of the face, this paper proposes a fast and robust method to detect and track faces by combining the AdaBoost and CamShift algorithms. First, the location and scale of the face are determined by the AdaBoost algorithm based on Haar-like features and conveyed automatically to the initial search window. Then, the CamShift algorithm is applied to track the face. Experiments based on OpenCV yield good results, even in special circumstances such as changing illumination and rapid face movement. In addition, by drawing the tracking trajectory of the face movement, some abnormal behavior events can be analyzed.
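
    A hedged sketch of this detect-then-track pattern with OpenCV follows; the cascade file, the camera source and the parameter values are illustrative assumptions, not details from the paper.

      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      cap = cv2.VideoCapture(0)                      # assumed camera source
      term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
      track_window, roi_hist = None, None

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          if track_window is None:
              # AdaBoost (Haar cascade) detection provides the initial search window
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              faces = cascade.detectMultiScale(gray, 1.1, 5)
              if len(faces):
                  x, y, w, h = faces[0]
                  track_window = (x, y, w, h)
                  roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None,
                                          [180], [0, 180])
                  cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
          else:
              # CamShift follows the face from frame to frame
              back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
              rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
              x, y, w, h = track_window
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow("face tracking", frame)
          if cv2.waitKey(1) == 27:                   # Esc quits
              break
      cap.release()
      cv2.destroyAllWindows()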

  16. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The quality and reliability of the signals recorded through the T-shirt were previously tested, while the algorithms employed for feature extraction and sleep classification were previously developed on standard ECG recordings, and the resulting classification was compared to standard clinical practice based on polysomnography (PSG). Here we combined T-shirt recordings and automatic classification and obtained reliable sleep profiles, i.e. sleep classification into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals. PMID:21096842

  17. Automatic Resolution of Ambiguous Terms Based on Machine Learning and Conceptual Relations in the UMLS

    PubMed Central

    Liu, Hongfang; Johnson, Stephen B.; Friedman, Carol

    2002-01-01

    Motivation. The UMLS has been used in natural language processing applications such as information retrieval and information extraction systems. The mapping of free-text to UMLS concepts is important for these applications. To improve the mapping, we need a method to disambiguate terms that possess multiple UMLS concepts. In the general English domain, machine-learning techniques have been applied to sense-tagged corpora, in which senses (or concepts) of ambiguous terms have been annotated (mostly manually). Sense disambiguation classifiers are then derived to determine senses (or concepts) of those ambiguous terms automatically. However, manual annotation of a corpus is an expensive task. We propose an automatic method that constructs sense-tagged corpora for ambiguous terms in the UMLS using MEDLINE abstracts. Methods. For a term W that represents multiple UMLS concepts, a collection of MEDLINE abstracts that contain W is extracted. For each abstract in the collection, occurrences of concepts that have relations with W as defined in the UMLS are automatically identified. A sense-tagged corpus, in which senses of W are annotated, is then derived based on those identified concepts. The method was evaluated on a set of 35 frequently occurring ambiguous biomedical abbreviations using a gold standard set that was automatically derived. The quality of the derived sense-tagged corpus was measured using precision and recall. Results. The derived sense-tagged corpus had an overall precision of 92.9% and an overall recall of 47.4%. After removing rare senses and ignoring abbreviations with closely related senses, the overall precision was 96.8% and the overall recall was 50.6%. Conclusions. UMLS conceptual relations and MEDLINE abstracts can be used to automatically acquire knowledge needed for resolving ambiguity when mapping free-text to UMLS concepts. PMID:12386113

  18. Integrating spatial altimetry data into the automatic calibration of hydrological models

    NASA Astrophysics Data System (ADS)

    Getirana, Augusto C. V.

    2010-06-01

    The automatic calibration of hydrological models has traditionally been performed using gauged data. However, inaccessibility of remote areas and lack of financial support mean that data are lacking in large tropical basins, such as the Amazon basin. Advances in the acquisition, processing and availability of spatially distributed remotely sensed data make the evaluation of computational models easier and more practical. This paper presents the pioneering integration of spatial altimetry data into the automatic calibration of a hydrological model. The study area is the Branco River basin, located in the northern Amazon basin. An empirical stage × discharge relation is obtained for the Negro River and transposed to the Branco River, which enables the correlation of spatial altimetry data with water discharge derived from the MGB-IPH hydrological model. Six scenarios are created combining two sets of objective functions with three different datasets. Two of them are composed of ENVISAT altimetric data, and the third one is derived from daily gauged discharges. The MOCOM-UA multi-criteria global optimization algorithm is used to optimize the model parameters. The calibration process is validated with gauged discharge at three gauge stations located along the Branco River and two tributaries. Results demonstrate that the combination of virtual stations along the river can provide reasonable parameters. Further, the considerably reduced number of observations provided by the satellite is not a restriction to the automatic calibration, yielding performance coefficients similar to those obtained with the process using daily gauged data.
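
    The paper's objective functions are not reproduced here; as a hedged example of the kind of criterion such a calibration typically optimizes, the sketch below computes a Nash-Sutcliffe-style efficiency between simulated and observed (or altimetry-derived) discharge.

      import numpy as np

      def nash_sutcliffe(simulated, observed):
          """Efficiency of 1.0 means a perfect match; values far below 0 mean
          the simulation performs worse than simply using the observed mean."""
          simulated = np.asarray(simulated, dtype=float)
          observed = np.asarray(observed, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)

      print(nash_sutcliffe([105.0, 98.0, 120.0], [100.0, 95.0, 125.0]))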

  19. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light

    NASA Astrophysics Data System (ADS)

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-01

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm-2 and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications.

  20. An automatic method for CASP9 free modeling structure prediction assessment

    PubMed Central

    Cong, Qian; Kinch, Lisa N.; Pei, Jimin; Shi, Shuoyong; Grishin, Vyacheslav N.; Li, Wenlin; Grishin, Nick V.

    2011-01-01

    Motivation: Manual inspection has been applied to and is well accepted for assessing critical assessment of protein structure prediction (CASP) free modeling (FM) category predictions over the years. Such manual assessment requires expertise and significant time investment, yet has the problems of being subjective and unable to differentiate models of similar quality. It is beneficial to incorporate the ideas behind manual inspection to an automatic score system, which could provide objective and reproducible assessment of structure models. Results: Inspired by our experience in CASP9 FM category assessment, we developed an automatic superimposition independent method named Quality Control Score (QCS) for structure prediction assessment. QCS captures both global and local structural features, with emphasis on global topology. We applied this method to all FM targets from CASP9, and overall the results showed the best agreement with Manual Inspection Scores among automatic prediction assessment methods previously applied in CASPs, such as Global Distance Test Total Score (GDT_TS) and Contact Score (CS). As one of the important components to guide our assessment of CASP9 FM category predictions, this method correlates well with other scoring methods and yet is able to reveal good-quality models that are missed by GDT_TS. Availability: The script for QCS calculation is available at http://prodata.swmed.edu/QCS/. Contact: grishin@chop.swmed.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994223

  1. Automatic reconstruction of physiological gestures used in a model of birdsong production.

    PubMed

    Boari, Santiago; Perl, Yonatan Sanz; Amador, Ana; Margoliash, Daniel; Mindlin, Gabriel B

    2015-11-01

    Highly coordinated learned behaviors are key to understanding neural processes integrating the body and the environment. Birdsong production is a widely studied example of such behavior in which numerous thoracic muscles control respiratory inspiration and expiration: the muscles of the syrinx control syringeal membrane tension, while upper vocal tract morphology controls resonances that modulate the vocal system output. All these muscles have to be coordinated in precise sequences to generate the elaborate vocalizations that characterize an individual's song. Previously we used a low-dimensional description of the biomechanics of birdsong production to investigate the associated neural codes, an approach that complements traditional spectrographic analysis. The prior study used algorithmic yet manual procedures to model singing behavior. In the present work, we present an automatic procedure to extract low-dimensional motor gestures that could predict vocal behavior. We recorded zebra finch songs and generated synthetic copies automatically, using a biomechanical model for the vocal apparatus and vocal tract. This dynamical model described song as a sequence of physiological parameters the birds control during singing. To validate this procedure, we recorded electrophysiological activity of the telencephalic nucleus HVC. HVC neurons were highly selective to the auditory presentation of the bird's own song (BOS) and gave similar selective responses to the automatically generated synthetic model of song (AUTO). Our results demonstrate meaningful dimensionality reduction in terms of physiological parameters that individual birds could actually control. Furthermore, this methodology can be extended to other vocal systems to study fine motor control. PMID:26378204

  2. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS-98 scale. The reliability of automatic intensity estimation is important, as such estimates are now used for automatic shakemap communications and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable given the increasing number of testimonies at BCSF; it can, however, take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave moderate scores (50 to 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well suited to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
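
    As an illustration only (not BCSF's code), the sketch below fits a multinomial logistic regression and an SVM to predict an SQI class from numerically encoded questionnaire answers; the data are synthetic placeholders.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # X: one row per questionnaire (encoded answers); y: expert SQI (EMS-98
      # intensities II..VI, here coded 2..6). Both are synthetic placeholders.
      rng = np.random.default_rng(0)
      X = rng.integers(0, 5, size=(500, 30))
      y = rng.integers(2, 7, size=500)

      for name, model in [("multinomial logit", LogisticRegression(max_iter=1000)),
                          ("SVM", SVC(kernel="rbf"))]:
          score = cross_val_score(model, X, y, cv=5).mean()
          print(f"{name}: mean cross-validated accuracy = {score:.2f}")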

  3. The opportunity cost model: automaticity, individual differences, and self-control resources.

    PubMed

    Hagger, Martin S

    2013-12-01

    I contend that Kurzban et al.'s model is silent on three issues. First, the extent to which opportunity-cost computations are automatic or deliberative is unclear. Second, the role of individual differences in biasing opportunity-cost computations needs elucidating. Third, in the absence of "next-best" tasks, task persistence will be indefinite, which seems unfeasible, so perhaps integration with a limited-resource account is necessary. PMID:24304785

  4. Applications of the automatic change detection for disaster monitoring by the knowledge-based framework

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.

    2012-11-01

    Change detection is a fundamental approach to the utilization of satellite remote sensing images, especially in multi-temporal analysis such as extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts, so robust automatic image interpretation is expected to be necessary. Several algorithms have been proposed in the field of automatic change detection in the past; however, they still lack robustness across multiple purposes, independence from the observing instrument, and accuracy better than manual interpretation. We are developing a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. This paper shows the results of applying the framework, without any modification or tuning, to two types of disasters.

  5. Automatic identification of shallow landslides based on Worldview2 remote sensing images

    NASA Astrophysics Data System (ADS)

    Ma, Hai-Rong; Cheng, Xinwen; Chen, Lianjun; Zhang, Haitao; Xiong, Hongwei

    2016-01-01

    Automatic identification of landslides from remote sensing images is important for investigating disasters and producing hazard maps. We propose a method to detect shallow landslides automatically using Worldview2 images. Features such as high soil brightness and low vegetation coverage help identify shallow landslides in remote sensing images, so soil brightness and a vegetation index were chosen as indexes for landslide remote sensing. The back scarp of a landslide can form dark shadow areas on the landslide mass, affecting the accuracy of landslide extraction; to eliminate this effect, a shadow index was also chosen. The first principal component (PC1) contained >90% of the image information and was therefore also selected as an index. The four selected indexes were used to synthesize a new image in which information on shallow landslides is enhanced while other background information is suppressed. Then PC1 was extracted from the new synthetic image, and an automatic threshold segmentation algorithm was used to segment the image and obtain candidate landslide areas. Based on landslide features such as slope, shape, and area, non-landslide areas were eliminated. Finally, four experimental sites were used to verify the feasibility of the developed method.
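
    A rough sketch of the index-synthesis and automatic-thresholding steps described above follows; the band combinations, sign conventions and helper names are assumptions for illustration, not the paper's exact formulation.

      import numpy as np
      from skimage.filters import threshold_otsu

      def ndvi(nir, red):
          """Normalized difference vegetation index."""
          return (nir - red) / (nir + red + 1e-6)

      def candidate_landslides(brightness, nir, red, shadow_index, pc1):
          # Stack the four indexes (bare soil is bright, sparsely vegetated and
          # unshadowed), take the first principal component of the stack, and
          # threshold it automatically with Otsu's method.
          stack = np.stack([brightness, -ndvi(nir, red), -shadow_index, pc1], axis=-1)
          flat = stack.reshape(-1, stack.shape[-1])
          flat = flat - flat.mean(axis=0)
          _, _, vt = np.linalg.svd(flat, full_matrices=False)
          pc1_new = (flat @ vt[0]).reshape(brightness.shape)
          return pc1_new > threshold_otsu(pc1_new)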

  6. Automatic segmentation and classification of gestational sac based on mean sac diameter using medical ultrasound image

    NASA Astrophysics Data System (ADS)

    Khazendar, Shan; Farren, Jessica; Al-Assam, Hisham; Sayasneh, Ahmed; Du, Hongbo; Bourne, Tom; Jassim, Sabah A.

    2014-05-01

    Ultrasound is an effective multipurpose imaging modality that has been widely used for monitoring and diagnosing early pregnancy events. Technology developments coupled with wide public acceptance have made ultrasound an ideal tool for better understanding and diagnosis of early pregnancy. The first measurable signs of an early pregnancy are the geometric characteristics of the Gestational Sac (GS). Currently, the size of the GS is manually estimated from ultrasound images. The manual measurement involves multiple subjective decisions, in which dimensions are taken in three planes to establish what is known as the Mean Sac Diameter (MSD). The manual measurement results in inter- and intra-observer variations, which may lead to difficulties in diagnosis. This paper proposes a fully automated diagnosis solution to accurately identify miscarriage cases in the first trimester of pregnancy based on automatic quantification of the MSD. Our study shows a strong positive correlation between the manual and the automatic MSD estimations. Our experimental results, based on a dataset of 68 ultrasound images, illustrate the effectiveness of the proposed scheme in identifying early miscarriage cases, with classification accuracies comparable to those of domain experts when a k-nearest-neighbor classifier is applied to the automatically estimated MSDs.
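
    The sketch below illustrates the final step only: computing a mean sac diameter and classifying it with a k-nearest-neighbor classifier. The training values and labels are invented for illustration and are not the paper's data.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def mean_sac_diameter(d1, d2, d3):
          """MSD is the mean of the three orthogonal sac diameters (mm)."""
          return (d1 + d2 + d3) / 3.0

      # Toy training set: MSD (mm) with labels (0 = viable, 1 = miscarriage)
      msd_train = np.array([[8.0], [12.0], [18.0], [26.0], [31.0], [40.0]])
      labels = np.array([0, 0, 0, 1, 1, 1])

      knn = KNeighborsClassifier(n_neighbors=3).fit(msd_train, labels)
      print(knn.predict([[mean_sac_diameter(27.0, 24.0, 30.0)]]))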

  7. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    PubMed

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) to improve temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of a standard plane block, which cannot accurately describe the acoustic properties at different spatial positions; the performance of sparse deconvolution therefore deteriorates owing to the deviations in the reference signals. Moreover, manual measurement of reference signals is inconvenient for automatic ultrasonic NDT. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations are alleviated and the accuracy of sparse deconvolution is thereby improved. Based on the automatic estimation of reference signals, regional sparse deconvolution becomes achievable by decomposing the whole B-scan image into small regions of interest (ROI), which significantly reduces the image dimensionality. Since the computation time of the proposed method has a power dependence on the signal length, this strategy also improves the computational efficiency significantly. The performance of the proposed method is demonstrated using immersion measurements of scattering targets and a steel block with side-drilled holes. The results verify that the proposed method maintains the vertical resolution enhancement and noise-suppression capabilities in different scenarios. PMID:26773787

  8. Automatic seizure detection based on the activity of a set of current dipoles: first steps.

    PubMed

    Gritsch, G; Hartmann, M; Perko, H; Fürbaß, F; Kluge, T

    2012-01-01

    In this paper we show the advantages of using an advanced montage scheme with respect to the performance of automatic seizure detection systems. The main goal is to find the best-performing montage scheme for our automatic seizure detection system. The new virtual montage is a fixed set of dipoles within the brain. The current density signals for these dipoles are derived from the scalp EEG signals via a smart linear transformation. The reason for testing an alternative approach is that traditional montages (reference, bipolar) have some limitations, e.g. the detection performance depends on the choice of the reference electrode, and extraction of spatial information is often demanding. In this paper we explain in detail how to adapt a modern seizure detection system to use current density signals. Furthermore, we show results concerning the detection performance of different montage schemes and their combination. PMID:23366519

  9. Automatic stress-relieving music recommendation system based on photoplethysmography-derived heart rate variability analysis.

    PubMed

    Shin, Il-Hyung; Cha, Jaepyeong; Cheon, Gyeong Woo; Lee, Choonghee; Lee, Seung Yup; Yoon, Hyung-Jin; Kim, Hee Chan

    2014-01-01

    This paper presents an automatic stress-relieving music recommendation system (ASMRS) for individual music listeners. The ASMRS uses a portable, wireless photoplethysmography module with a finger-type sensor, and a program that translates heartbeat signals from the sensor to the stress index. The sympathovagal balance index (SVI) was calculated from heart rate variability to assess the user's stress levels while listening to music. Twenty-two healthy volunteers participated in the experiment. The results have shown that the participants' SVI values are highly correlated with their prespecified music preferences. The sensitivity and specificity of the favorable music classification also improved as the number of music repetitions increased to 20 times. Based on the SVI values, the system automatically recommends favorable music lists to relieve stress for individuals. PMID:25571461
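
    As a hedged sketch of the stress index described above, the code below computes a sympathovagal balance as the LF/HF power ratio of heart rate variability; the frequency bands follow common HRV practice and the resampling rate is an assumption, not a detail from the paper.

      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def svi_from_rr(rr_seconds, fs=4.0):
          """rr_seconds: pulse-to-pulse (RR) intervals in seconds from the PPG."""
          t = np.cumsum(rr_seconds)
          even_t = np.arange(t[0], t[-1], 1.0 / fs)
          rr_even = interp1d(t, rr_seconds)(even_t)            # evenly resample
          f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                         nperseg=min(256, len(rr_even)))
          lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
          hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
          return lf / hf                                       # higher = more stress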

  10. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical-dialog-based Windows XX application driven by menu, mouse and keyboard. On the one hand, it is a conversion of an existing Fortran program, ACTIV [1], to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: as the peak model, both an analytical function and a graphical curve can be used; the peak search algorithm can recognize not only Gaussian peaks but also peaks with an irregular form, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY. Program Title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: no. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: Program VACTIV is intended for precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  12. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    PubMed

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's one, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery. PMID:26577253

  13. Investment appraisal of automatic milking and conventional milking technologies in a pasture-based dairy system.

    PubMed

    Shortall, J; Shalloo, L; Foley, C; Sleator, R D; O'Brien, B

    2016-09-01

    The successful integration of automatic milking (AM) systems and grazing has resulted in AM becoming a feasible alternative to conventional milking (CM) in pasture-based systems. The objective of this study was to identify the profitability of AM in a pasture-based system, relative to CM herringbone parlors with 2 different levels of automation, across 2 farm sizes, over a 10-yr period following initial investment. The scenarios which were evaluated were (1) a medium farm milking 70 cows twice daily, with 1 AM unit, a 12-unit CM medium-specification (MS) parlor and a 12-unit CM high-specification (HS) parlor, and (2) a large farm milking 140 cows twice daily with 2 AM units, a 20-unit CM MS parlor and a 20-unit CM HS parlor. A stochastic whole-farm budgetary simulation model combined capital investment costs and annual labor and maintenance costs for each investment scenario, with each scenario evaluated using multiple financial metrics, such as annual net profit, annual net cash flow, total discounted net profitability, total discounted net cash flow, and return on investment. The capital required for each investment was financed from borrowings at an interest rate of 5% and repaid over 10-yr, whereas milking equipment and building infrastructure were depreciated over 10 and 20 yr, respectively. A supporting labor audit (conducted on both AM and CM farms) showed a 36% reduction in labor demand associated with AM. However, despite this reduction in labor, MS CM technologies consistently achieved greater profitability, irrespective of farm size. The AM system achieved intermediate profitability at medium farm size; it was 0.5% less profitable than HS technology at the large farm size. The difference in profitability was greatest in the years after the initial investment. This study indicated that although milking with AM was less profitable than MS technologies, it was competitive when compared with a CM parlor of similar technology. PMID:27423956

  14. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model for estimating a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated with the desired parameter. Initially, a pre-processing algorithm takes a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially: the result of each SPPA is used as input to the next, and so on until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: Normalized band Difference Regression (NDR), Multiple Linear Regression (MLR) and Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, to select the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation indicates not only the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory given its low complexity. The MLR method showed severe drawbacks due to noise-induced collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance, and the selected pre-processing algorithms differed for each regression method, since each regression method handles the explanatory variables in a different way.

  15. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  16. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints. PMID:19743409

  17. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images

    PubMed Central

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the important organs and their boundaries in a very low brightness-contrast environment; our fuzzy sigma binarization process is one answer to that problem. Another difficulty is compensating for the information loss that occurs during such image processing procedures, for which many morphologically motivated image processing algorithms are applied. The proposed method is verified as successful in extracting DCFs and measuring their thickness in an experiment using two hundred 800 × 600 DICOM ultrasonography images, with a 98.5% extraction rate. Also, the DCF thickness automatically measured by this software shows only a small difference (less than 0.3 cm) for 89.8% of the extracted DCFs. PMID:26949411

  18. Research on large spatial coordinate automatic measuring system based on multilateral method

    NASA Astrophysics Data System (ADS)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, a manipulator-based automatic measurement system based on the multilateral method is developed. This system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the measuring centers of the laser tracers and the measuring center of the cat's eye reflector, and an algorithm that automatically adjusts the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to plan the orientations of the reflector for the given 24 points automatically. After improving the orientations of the few points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The results show that the proposed algorithm helps the developed system measure spatial coordinates over a large range efficiently.

  20. Semi-automatic inspecting instrument for watch escape wheel based on machine vision

    NASA Astrophysics Data System (ADS)

    Wang, Zhong; Wang, Zhen-wei; Zhang, Jin; Cai, Zhen-xing; Liu, Xin-bo

    2011-12-01

    The escape wheel, a typical precision micro-machined part, is one of the most precise parts in a mechanical watch. A new inspection instrument based on machine vision technology, used to achieve semi-automatic inspection of the watch escape wheel, is introduced in this paper. The instrument uses a high-resolution CCD sensor and an independently designed lens as the imaging system. It can image an area 7 mm in diameter in a single shot, has micrometer-level resolving power, and cooperates with a two-dimensional moving stage to achieve continuous, automatic measurement of workpieces placed in an array. The following aspects are highlighted: the measuring principle and process, the basic components of the array-type measuring workbench, the positioning process, and the mechanisms for adjusting verticality, parallelism and other precision parameters. Together with a novel escape-wheel preparation tool, this instrument forms an array-type semi-automatic measuring mode. At present, the instrument has been running successfully in industrial settings.

  1. Automatic hypermnesia and impaired recollection in schizophrenia.

    PubMed

    Linscott, R J; Knight, R G

    2001-10-01

    Evidence from studies of nonmnemonic automatic cognitive processes provides reason to expect that schizophrenia is associated with exaggerated automatic memory (implicit memory), or automatic hypermnesia. Participants with schizophrenia (n = 22) and control participants (n = 26) were compared on word stem completion (WSC) and list discrimination (LD) tasks administered using the process dissociation procedure. Unadjusted, extended measurement model and dual-process signal-detection methods were used to estimate recollection and automatic memory indices. Schizophrenia was associated with automatic hypermnesia on the WSC task and impaired recollection on both tasks. Thought disorder was associated with even greater automatic hypermnesia. The absence of automatic hypermnesia on the LD task was interpreted with reference to the neuropsychological bases of context and content memory. PMID:11761047
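
    For readers unfamiliar with the process dissociation procedure mentioned above, the worked example below applies the standard Jacoby equations to illustrative inclusion/exclusion probabilities (not the study's data) to separate recollection from automatic memory.

      def process_dissociation(p_inclusion, p_exclusion):
          """P(inclusion) = R + (1 - R) * A;  P(exclusion) = (1 - R) * A."""
          r = p_inclusion - p_exclusion                 # recollection estimate
          a = p_exclusion / (1.0 - r) if r < 1.0 else float("nan")  # automatic memory
          return r, a

      print(process_dissociation(0.60, 0.35))           # R = 0.25, A ~= 0.47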

  2. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report approaches based on machine learning (ML) for the extraction of relevant samples from a big data space and apply them to ASR, using certain soft computing techniques, for Assamese speech with dialectal variations. A class of ML techniques is considered, comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large store. Next, a few conventional methods are used for feature extraction of a few selected types; the features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  3. Automaticity in acute ischemia: Bifurcation analysis of a human ventricular model

    NASA Astrophysics Data System (ADS)

    Bouchard, Sylvain; Jacquemet, Vincent; Vinet, Alain

    2011-01-01

    Acute ischemia (restriction in blood supply to part of the heart as a result of myocardial infarction) induces major changes in the electrophysiological properties of the ventricular tissue. Extracellular potassium concentration ([Ko+]) increases in the ischemic zone, leading to an elevation of the resting membrane potential that creates an “injury current” (IS) between the infarcted and the healthy zone. In addition, the lack of oxygen impairs the metabolic activity of the myocytes and decreases ATP production, thereby affecting ATP-sensitive potassium channels (IKatp). Frequent complications of myocardial infarction are tachycardia, fibrillation, and sudden cardiac death, but the mechanisms underlying their initiation are still debated. One hypothesis is that these arrhythmias may be triggered by abnormal automaticity. We investigated the effect of ischemia on myocyte automaticity by performing a comprehensive bifurcation analysis (fixed points, cycles, and their stability) of a human ventricular myocyte model [K. H. W. J. ten Tusscher and A. V. Panfilov, Am. J. Physiol. Heart Circ. Physiol. 291, H1088 (2006)] as a function of three ischemia-relevant parameters: [Ko+], IS, and IKatp. In this single-cell model, we found that automatic activity was possible only in the presence of an injury current. Changes in [Ko+] and IKatp significantly altered the bifurcation structure of IS, including the occurrence of early afterdepolarizations. The results provide a sound basis for studying higher-dimensional tissue structures representing an ischemic heart.

  4. An automatic recognition method of pointer instrument based on improved Hough transform

    NASA Astrophysics Data System (ADS)

    Xu, Li; Fang, Tian; Gao, Xiaoyu

    2015-10-01

    For the automatic reading of pointer instruments, a recognition method based on an improved Hough Transform is proposed in this paper. Automatic recognition of pointer instruments must work under all kinds of lighting conditions, but the accuracy of binarization is affected when the light is too strong or too dark. Therefore, an improved Otsu method is suggested for adaptive thresholding of pointer-instrument images under various lighting conditions. Based on the characteristics of the dial image, the Otsu method is used to obtain the maximum between-class variance and an initial threshold, and the maximum between-class variance is then analyzed to judge whether the image is bright or dark. When the image is too bright or too dark, the pixels with smaller values are discarded and the initial threshold is recalculated with the Otsu method, iterating until the best binarized image is obtained. The pointer line in the binarized image is then transformed into the Hough parameter space through the improved Hough Transform, and the position of the pointer line is determined by searching for the maximum value in the accumulator for each angle. Finally, according to the angle method, the pointer reading is obtained from the linear relationship between the initial scale and the angle of the pointer. Results show that the improved Otsu method makes it possible to obtain an accurate binarized image even when the light is too bright or too dark, which improves the adaptability of automatic pointer-instrument recognition to different lighting conditions. For pressure gauges with a range of 60 MPa, the relative identification error reached 0.005 when using the improved Hough Transform algorithm.
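
    The sketch below illustrates the two final steps only, under assumed dial geometry: finding the dominant pointer angle with the Hough transform and converting it to a reading via the linear angle method; the scale limits and threshold are placeholders, not values from the paper.

      import numpy as np
      import cv2

      def pointer_angle(binary_dial):
          """Dominant line angle (degrees) in an 8-bit binary dial image."""
          lines = cv2.HoughLines(binary_dial, 1, np.pi / 180, 80)
          if lines is None:
              raise ValueError("no pointer line found")
          rho, theta = lines[0][0]
          return float(np.degrees(theta))

      def angle_to_reading(angle, angle_min=45.0, angle_max=315.0,
                           value_min=0.0, value_max=60.0):
          """Linear mapping from pointer angle to scale value (e.g. MPa)."""
          frac = (angle - angle_min) / (angle_max - angle_min)
          return value_min + frac * (value_max - value_min)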

  5. Automatic and Quantitative Measurement of Collagen Gel Contraction Using Model-Guided Segmentation.

    PubMed

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R; Zhao, Chunfeng; Amadio, Peter C; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behaviors and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model (DCM) which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods. PMID:24092954
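
    As a hedged illustration of the measurements and the validation metric mentioned above, the sketch below derives gel area and an equivalent diameter from a binary segmentation mask and computes the Dice similarity coefficient; the pixel spacing is an assumed parameter.

      import numpy as np

      def gel_measurements(mask, mm_per_pixel=0.05):
          """Area (mm^2) and equal-area-circle diameter (mm) of a binary gel mask."""
          area_mm2 = mask.astype(bool).sum() * mm_per_pixel ** 2
          diameter_mm = 2.0 * np.sqrt(area_mm2 / np.pi)
          return area_mm2, diameter_mm

      def dice(a, b):
          """Dice similarity coefficient between two binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())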

  6. Two-Stage Automatic Calibration and Predictive Uncertainty Analysis of a Semi-distributed Watershed Model

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Radcliffe, D. E.; Doherty, J.

    2004-12-01

    Automatic calibration has been applied to conceptual rainfall-runoff models for more than three decades, usually to lumped models. Even when a (semi-)distributed model that allows spatial variability of parameters is calibrated using an automated process, the parameters are often lumped over space so that the model is effectively simplified to a lumped model. Our objective was to develop a two-stage routine for automatically calibrating the Soil Water Assessment Tool (SWAT, a semi-distributed watershed model) that would find the optimal values for the model parameters, preserve the spatial variability in essential parameters, and lead to a measure of the model prediction uncertainty. In the first stage of this calibration scheme, a global search method, the Shuffled Complex Evolution (SCE-UA) method, was employed to find the "best" values for the lumped model parameters. That is, in order to limit the number of calibrated parameters, the model parameters were assumed to be invariant over the different subbasins and hydrologic response units (HRUs, the basic calculation unit in the SWAT model). In the second stage, however, the spatial variability of the original model parameters was restored and the number of calibrated parameters was dramatically increased (from a few to nearly a hundred). Hence, a local search method, a variation of the Levenberg-Marquardt method, was preferred to find the more distributed set of parameters, using the results of the previous stage as starting values. Furthermore, in order to prevent the parameters from taking extreme values, a strategy called "regularization" was adopted, through which the distributed parameters were constrained to vary as little as possible from the initial values of the lumped parameters. We calibrated the stream flow in the Etowah River measured at Canton, GA (a watershed area of 1,580 km2) for the years 1983-1992 and used the years 1993-2001 for validation. Calibration for daily and

  7. Noise Robust Feature Scheme for Automatic Speech Recognition Based on Auditory Perceptual Mechanisms

    NASA Astrophysics Data System (ADS)

    Cai, Shang; Xiao, Yeming; Pan, Jielin; Zhao, Qingwei; Yan, Yonghong

    Mel Frequency Cepstral Coefficients (MFCC) are the most popular acoustic features used in automatic speech recognition (ASR), mainly because the coefficients capture the most useful information of the speech and fit well with the assumptions used in hidden Markov models. As is well known, MFCCs already employ several principles which have known counterparts in the peripheral properties of human hearing: decoupling across frequency, mel-warping of the frequency axis, log-compression of energy, etc. It is natural to introduce more mechanisms in the auditory periphery to improve the noise robustness of MFCC. In this paper, a k-nearest neighbors based frequency masking filter is proposed to reduce the audibility of spectra valleys which are sensitive to noise. Besides, Moore and Glasberg's critical band equivalent rectangular bandwidth (ERB) expression is utilized to determine the filter bandwidth. Furthermore, a new bandpass infinite impulse response (IIR) filter is proposed to imitate the temporal masking phenomenon of the human auditory system. These three auditory perceptual mechanisms are combined with the standard MFCC algorithm in order to investigate their effects on ASR performance, and a revised MFCC extraction scheme is presented. Recognition performances with the standard MFCC, RASTA perceptual linear prediction (RASTA-PLP) and the proposed feature extraction scheme are evaluated on a medium-vocabulary isolated-word recognition task and a more complex large vocabulary continuous speech recognition (LVCSR) task. Experimental results show that consistent robustness against background noise is achieved on these two tasks, and the proposed method outperforms both the standard MFCC and RASTA-PLP.
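
    The Moore and Glasberg ERB expression referred to above is ERB(f) = 24.7 (4.37 f / 1000 + 1), with f in Hz; the sketch below evaluates it and builds a toy triangular band-pass weight spanning one ERB (the triangular filter shape is an illustrative assumption, not the paper's filter design).

      import numpy as np

      def erb_bandwidth(f_hz):
          """Equivalent rectangular bandwidth (Hz) at centre frequency f_hz."""
          return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

      def erb_triangular_filter(fft_freqs, centre_hz):
          """Triangular weight spanning one ERB around centre_hz."""
          half = erb_bandwidth(centre_hz) / 2.0
          return np.clip(1.0 - np.abs(fft_freqs - centre_hz) / half, 0.0, None)

      freqs = np.linspace(0, 8000, 257)        # e.g. 512-point FFT at 16 kHz
      print(erb_bandwidth(1000.0))             # ~132.6 Hz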

  8. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  9. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.

  10. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: (1) randomly generate an initial population, and (2) repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation was robust and gave results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
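
    As a toy illustration of the two GA steps described above, the sketch below uses a genetic algorithm to recover a 2D integer translation between two images by maximizing normalized cross-correlation. The fitness function, population size, crossover and mutation scheme are illustrative assumptions and do not reproduce the paper's rigid-body/affine 3D implementation.

      import numpy as np

      def ncc(a, b):
          """Normalized cross-correlation fitness between two same-size images."""
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def shift(img, dx, dy):
          return np.roll(np.roll(img, int(dy), axis=0), int(dx), axis=1)

      def ga_register(fixed, moving, pop=40, gens=60, bound=20, seed=0):
          rng = np.random.default_rng(seed)
          # Step 1: randomly generate an initial population of (dx, dy) candidates
          P = rng.integers(-bound, bound + 1, size=(pop, 2)).astype(float)
          for _ in range(gens):
              fit = np.array([ncc(fixed, shift(moving, dx, dy)) for dx, dy in P])
              parents = P[np.argsort(fit)[::-1][: pop // 2]]       # natural selection
              children = []
              while len(children) < pop - len(parents):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  child = np.where(rng.random(2) < 0.5, a, b)      # crossover (inheritance)
                  child += rng.normal(0.0, 1.0, size=2)            # mutation
                  children.append(np.clip(child, -bound, bound))
              P = np.vstack([parents, children])
          # Step 2 terminates: return the fittest individual of the final population
          fit = np.array([ncc(fixed, shift(moving, dx, dy)) for dx, dy in P])
          return P[np.argmax(fit)]

      # toy usage: recover a known shift between two random images
      rng = np.random.default_rng(1)
      fixed = rng.random((64, 64))
      moving = np.roll(np.roll(fixed, -5, axis=0), 7, axis=1)
      print(ga_register(fixed, moving))   # should approach (dx, dy) = (-7, 5)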

  11. Automatic organ segmentation on torso CT images by using content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Watanabe, Atsuto; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-02-01

    This paper presents a fast and robust segmentation scheme that automatically identifies and extracts massive-organ regions on torso CT images. In contrast to conventional algorithms that are designed empirically for segmenting a specific organ based on traditional image processing techniques, the proposed scheme uses a fully data-driven approach to accomplish a universal solution for segmenting the different massive-organ regions on CT images. Our scheme includes three processing steps: machine-learning-based organ localization, content-based image (reference) retrieval, and atlas-based organ segmentation techniques. We applied this scheme to automatic segmentation of the heart, liver, spleen, and left and right kidney regions on non-contrast CT images, which are still difficult tasks for traditional segmentation algorithms. The segmentation results of these organs were compared with the ground truth that was manually identified by a medical expert. The Jaccard similarity coefficient between the ground truth and the automated segmentation result centered on 67% for the heart, 81% for the liver, 78% for the spleen, 75% for the left kidney, and 77% for the right kidney. The usefulness of our proposed scheme was confirmed.
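
    For reference, a minimal sketch of the Jaccard similarity coefficient used above to score each automated organ segmentation against the expert ground truth; the two masks here are toy arrays.

      import numpy as np

      def jaccard(seg, gt):
          """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| between two binary masks."""
          seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
          union = np.logical_or(seg, gt).sum()
          return np.logical_and(seg, gt).sum() / union if union else 1.0

      # toy usage with two overlapping square masks
      a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
      b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
      print(round(jaccard(a, b), 3))   # intersection 16, union 34 -> ~0.471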

  12. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is developed based on the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of the city of Sherbrooke (Canada). An overall total extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.

  13. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  14. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  15. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud

    NASA Astrophysics Data System (ADS)

    Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.

    2014-03-01

    Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.

  16. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    SciTech Connect

    Schoot, A. J. A. J. van de; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data from 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and the SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation
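
    A minimal sketch of two of the evaluation measures mentioned above: the Dice similarity coefficient between a segmentation and a manual delineation, and a Bland-Altman analysis (bias and 95% limits of agreement) between paired bladder volumes. The masks and volume values are synthetic stand-ins.

      import numpy as np

      def dice(seg, gt):
          """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
          seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
          denom = seg.sum() + gt.sum()
          return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

      def bland_altman(manual_vol, auto_vol):
          """Bias and 95% limits of agreement between paired volume measurements."""
          diff = np.asarray(auto_vol, float) - np.asarray(manual_vol, float)
          bias = diff.mean()
          loa = 1.96 * diff.std(ddof=1)
          return bias, bias - loa, bias + loa

      # toy usage
      seg = np.zeros((20, 20, 20), bool); seg[5:15, 5:15, 5:15] = True
      gt = np.zeros((20, 20, 20), bool);  gt[6:16, 5:15, 5:15] = True
      manual = np.array([210.0, 180.0, 250.0, 300.0, 220.0])   # synthetic volumes in cm^3
      auto = np.array([205.0, 188.0, 245.0, 310.0, 215.0])
      print(round(dice(seg, gt), 3), bland_altman(manual, auto))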

  17. Simplified white-light interferometric strain sensor based on HB fiber with automatic temperature compensation

    NASA Astrophysics Data System (ADS)

    Ma, Jianjun; Bock, Wojtek J.; Urbanczyk, Waclaw

    2003-02-01

    A simplified white-light interferometric strain sensor based on HB fibers with automatic temperature compensation is presented. A variety of experiments conducted within this study confirm that adequate temperature compensation could be achieved. Several different sensor structures were investigated during these experiments. One of the most important results shows that the interference contrast can significantly influence the measurement accuracy achievable by the system. Consequently, an absolute accuracy of 1% or even better is possible for short sensing fibers if the contrast is enhanced to 0.5. A quasi-distributed cascade containing several discrete sensors with 0.5 contrast is also suggested.

  18. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    NASA Astrophysics Data System (ADS)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation, and texture blending. Firstly, a mesh parameterization procedure consisting of mesh segmentation and mesh unfolding is performed to reduce geometric distortion when mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  19. Automatic Liver Segmentation on Volumetric CT Images Using Supervoxel-Based Graph Cuts.

    PubMed

    Wu, Weiwei; Zhou, Zhuhuang; Wu, Shuicai; Zhang, Yanhua

    2016-01-01

    Accurate segmentation of the liver from abdominal CT scans is critical for computer-assisted diagnosis and therapy. Despite many years of research, automatic liver segmentation remains a challenging task. In this paper, a novel method was proposed for automatic delineation of the liver on CT volume images using supervoxel-based graph cuts. To extract the liver volume of interest (VOI), the abdominal region was first determined based on maximum intensity projection (MIP) and thresholding methods. Then, the patient-specific liver VOI was extracted from the abdominal region by using a histogram-based adaptive thresholding method and morphological operations. The supervoxels of the liver VOI were generated using the simple linear iterative clustering (SLIC) method. The foreground/background seeds for graph cuts were generated on the largest liver slice, and the graph cuts algorithm was applied to the VOI supervoxels. Thirty abdominal CT images were used to evaluate the accuracy and efficiency of the proposed algorithm. Experimental results show that the proposed method can detect the liver accurately with a significant reduction in processing time, especially when dealing with diseased liver cases. PMID:27127536
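
    A hedged sketch of the supervoxel step using scikit-image's slic (assuming scikit-image >= 0.19, where channel_axis=None marks a single-channel volume). The volume, parameter values and the seed regions are illustrative stand-ins; the subsequent graph cut over the supervoxel graph would need an additional max-flow library and is only indicated in a comment.

      import numpy as np
      from skimage.segmentation import slic

      # stand-in for the extracted liver VOI, intensities roughly normalized to [0, 1]
      vol = np.random.default_rng(0).random((64, 128, 128))

      # SLIC supervoxels over the 3D volume
      supervoxels = slic(vol, n_segments=2000, compactness=0.1, channel_axis=None)

      # Foreground/background seed supervoxels taken from one slice (here simply the middle
      # slice and hand-picked windows as stand-ins; the paper uses the largest liver slice).
      mid = vol.shape[0] // 2
      fg_labels = np.unique(supervoxels[mid, 40:90, 40:90])   # assumed liver region
      bg_labels = np.unique(supervoxels[mid, :10, :10])       # assumed background corner
      # These seed label sets would then constrain a min-cut over the supervoxel adjacency graph.
      print(len(np.unique(supervoxels)), len(fg_labels), len(bg_labels))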

  20. Automatic classifier based on heart rate variability to identify fallers among hypertensive subjects

    PubMed Central

    Melillo, Paolo; Jovic, Alan; De Luca, Nicola; Pecchia, Leandro

    2015-01-01

    Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal of identifying fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with a feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80% and 72%, respectively), but low sensitivity (51%). These results suggested that ANS states causing falls could be reliably detected, but also that not all the falls were due to ANS states. PMID:26609412
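
    A hedged sketch in the spirit of the PCA-plus-boosting classifier described above, with scikit-learn's AdaBoost standing in for RUSBoost (which additionally applies random undersampling and is available, e.g., in the imbalanced-learn package). The HRV feature matrix and labels are synthetic, so the printed numbers are not comparable to the reported results.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score, recall_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(168, 20))                     # stand-in for linear + nonlinear HRV features
      y = (rng.random(168) < 47 / 168).astype(int)       # 1 = faller, roughly the reported prevalence

      clf = make_pipeline(PCA(n_components=10),
                          AdaBoostClassifier(n_estimators=200, random_state=0))
      proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      print("AUC: %.2f  sensitivity@0.5: %.2f"
            % (roc_auc_score(y, proba), recall_score(y, (proba > 0.5).astype(int))))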

  1. UAS-based automatic bird count of a common gull colony

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2013-08-01

    The standard procedure to count birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (± 5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Based on the experience of 2011, the automatic bird count of 2012 became more efficient and more accurate: for 2012, 1938 birds were counted with an accuracy of approx. ± 3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks that are unsuitable for breeding birds.

  2. A Fully Automatic Method for Gridding Bright Field Images of Bead-Based Microarrays.

    PubMed

    Datta, Abhik; Wai-Kin Kong, Adams; Yow, Kin-Choong

    2016-07-01

    In this paper, a fully automatic method for gridding bright field images of bead-based microarrays is proposed. Numerous techniques have been developed for gridding fluorescence images of traditional spotted microarrays but, to the best of our knowledge, no algorithm has yet been developed for gridding bright field images of bead-based microarrays. The proposed gridding method is designed for automatic quality control during fabrication and assembly of bead-based microarrays. The method begins by estimating the grid parameters using an evolutionary algorithm. This is followed by a grid-fitting step that rigidly aligns an ideal grid with the image. Finally, a grid refinement step deforms the ideal grid to better fit the image. The grid fitting and refinement are performed locally and the final grid is a nonlinear (piecewise affine) grid. To deal with extreme corruptions in the image, the initial grid parameter estimation and grid-fitting steps employ robust search techniques. The proposed method does not have any free parameters that need tuning. The method is capable of identifying the grid structure even in the presence of extreme amounts of artifacts and distortions. Evaluation results on a variety of images are presented. PMID:26011899

  3. Automatic Liver Segmentation on Volumetric CT Images Using Supervoxel-Based Graph Cuts

    PubMed Central

    Wu, Weiwei; Zhou, Zhuhuang; Wu, Shuicai; Zhang, Yanhua

    2016-01-01

    Accurate segmentation of the liver from abdominal CT scans is critical for computer-assisted diagnosis and therapy. Despite many years of research, automatic liver segmentation remains a challenging task. In this paper, a novel method was proposed for automatic delineation of the liver on CT volume images using supervoxel-based graph cuts. To extract the liver volume of interest (VOI), the abdominal region was first determined based on maximum intensity projection (MIP) and thresholding methods. Then, the patient-specific liver VOI was extracted from the abdominal region by using a histogram-based adaptive thresholding method and morphological operations. The supervoxels of the liver VOI were generated using the simple linear iterative clustering (SLIC) method. The foreground/background seeds for graph cuts were generated on the largest liver slice, and the graph cuts algorithm was applied to the VOI supervoxels. Thirty abdominal CT images were used to evaluate the accuracy and efficiency of the proposed algorithm. Experimental results show that the proposed method can detect the liver accurately with a significant reduction in processing time, especially when dealing with diseased liver cases. PMID:27127536

  4. Automatic classifier based on heart rate variability to identify fallers among hypertensive subjects.

    PubMed

    Melillo, Paolo; Jovic, Alan; De Luca, Nicola; Pecchia, Leandro

    2015-08-01

    Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal of identifying fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with a feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80% and 72%, respectively), but low sensitivity (51%). These results suggested that ANS states causing falls could be reliably detected, but also that not all the falls were due to ANS states. PMID:26609412

  5. Automatic Extraction of IndoorGML Core Model from OpenStreetMap

    NASA Astrophysics Data System (ADS)

    Mirvahabi, S. S.; Abbaspour, R. A.

    2015-12-01

    Navigation has become an essential component of human life and a necessary component in many fields. Because of the increasing size and complexity of buildings, a unified data model is needed for navigation analysis and the exchange of information. IndoorGML describes an appropriate data model and XML schema of indoor spatial information that focuses on modelling indoor spaces. Collecting spatial data through professional and commercial providers is often costly and time-consuming, which is the major reason that volunteered geographic information (VGI) emerged. One of the most popular VGI projects is OpenStreetMap (OSM). In this paper, a new approach is proposed for the automatic generation of an IndoorGML core data file from an OSM data file. The output of this approach is a core data model file that can be used alongside the navigation data model for indoor navigation applications.

  6. Automatic segmentation and statistical shape modeling of the paranasal sinuses to estimate natural variations

    NASA Astrophysics Data System (ADS)

    Sinha, Ayushi; Leonard, Simon; Reiter, Austin; Ishii, Masaru; Taylor, Russell H.; Hager, Gregory D.

    2016-03-01

    We present an automatic segmentation and statistical shape modeling system for the paranasal sinuses which allows us to locate structures in and around the sinuses, as well as to observe the variability in these structures. This system involves deformably registering a given patient image to a manually segmented template image, and using the resulting deformation field to transfer labels from the template to the patient image. We use 3D snake splines to correct errors in this initial segmentation. Once we have several accurately segmented images, we build statistical shape models to observe the population mean and variance for each structure. These shape models are useful to us in several ways. Regular registration methods are insufficient to accurately register pre-operative computed tomography (CT) images with intra-operative endoscopy video of the sinuses. This is because of deformations that occur in structures containing erectile tissue. Our aim is to estimate these deformations using our shape models in order to improve video-CT registration, as well as to distinguish normal variations in anatomy from abnormal variations, and automatically detect and stage pathology. We can also compare the mean shapes and variances in different populations, such as different genders or ethnicities, in order to observe differences and similarities, as well as in different age groups in order to observe the developmental changes that occur in the sinuses.
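
    A minimal sketch of a point-distribution shape model of the kind described: pre-aligned landmark sets are stacked, the mean shape is computed, and the principal modes of variation come from an SVD of the centred data. The landmark sets here are synthetic stand-ins for segmented sinus structures.

      import numpy as np

      def build_shape_model(shapes, n_modes=3):
          """shapes: (n_subjects, n_landmarks, 3) pre-aligned landmark sets."""
          X = shapes.reshape(len(shapes), -1)                 # (n_subjects, 3 * n_landmarks)
          mean = X.mean(axis=0)
          U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
          modes = Vt[:n_modes]                                # principal modes of variation
          variances = (S[:n_modes] ** 2) / (len(shapes) - 1)
          return mean, modes, variances

      def synthesize(mean, modes, variances, b):
          """Generate a shape from mode weights b given in standard deviations."""
          return (mean + (b * np.sqrt(variances)) @ modes).reshape(-1, 3)

      # toy usage with random pre-aligned shapes
      rng = np.random.default_rng(0)
      shapes = rng.normal(size=(12, 50, 3))
      mean, modes, var = build_shape_model(shapes)
      new_shape = synthesize(mean, modes, var, b=np.array([1.0, -0.5, 0.0]))
      print(new_shape.shape)   # (50, 3)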

  7. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI.

    PubMed

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R

    2003-10-01

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity--even in the same tissue--reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight. PMID:14579858

  8. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that the authors proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets: clinical abdominal CT scans from 20 patients (10 male and 10 female), and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  9. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of the geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI datasets. The model is initially localized based on the intensity profiles in three directions. Weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

  10. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach to using available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source for training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often exists either due to less informative data or a large hypothesis space. Thus this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
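
    A toy sketch of the centroid-distance idea behind the mislabel removal step: samples whose feature vectors lie far from the centroid of their declared class are dropped before training. The distance threshold, features and mislabel simulation are illustrative assumptions, not the full iterative ensemble procedure of ECRA.

      import numpy as np

      def remove_mislabels(X, labels, n_std=2.0):
          """Keep samples within n_std standard deviations of their declared class centroid."""
          keep = np.ones(len(X), dtype=bool)
          for c in np.unique(labels):
              idx = np.where(labels == c)[0]
              centroid = X[idx].mean(axis=0)
              d = np.linalg.norm(X[idx] - centroid, axis=1)
              keep[idx] = d <= d.mean() + n_std * d.std()
          return keep

      # toy usage: two declared crop classes with a few mislabeled parcels
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
      labels = np.repeat([0, 1], 50)
      labels[:3] = 1                                 # simulate farmer mislabels
      keep = remove_mislabels(X, labels)
      print(keep.sum(), "of", len(keep), "samples kept")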

  11. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  12. Automatic decision support system based on SAR data for oil spill detection

    NASA Astrophysics Data System (ADS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Rodríguez, Pablo G.; Caro, Andrés

    2014-11-01

    Global trade is mainly supported by maritime transport, which generates important pollution problems. Thus, effective surveillance and intervention means are necessary to ensure a proper response to environmental emergencies. Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillages on the ocean's surface. Several decision support systems have been based on this technology. This paper presents an automatic oil spill detection system based on SAR data, which was developed on the basis of confirmed spillages and adapted to an important international shipping route off the Galician coast (northwest Iberian Peninsula). The system was supported by an adaptive segmentation process based on wind data as well as a shape-oriented characterization algorithm. Moreover, two classifiers were developed and compared. Image testing revealed a candidate labeling accuracy of up to 95.1%. Shared-memory parallel programming techniques were used to develop the algorithms in order to reduce the system processing time by more than 25%.

  13. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup. PMID:26471786

  14. Automatic evaluation of hypernasality based on a cleft palate speech database.

    PubMed

    He, Ling; Zhang, Jing; Liu, Qi; Yin, Heng; Lech, Margaret; Huang, Yunzhi

    2015-05-01

    Hypernasality is one of the most typical characteristics of cleft palate (CP) speech. The evaluation outcome of hypernasality grading decides the necessity of follow-up surgery. Currently, the evaluation of CP speech is carried out by experienced speech therapists. However, the result strongly depends on their clinical experience and subjective judgment. This work aims to propose an automatic evaluation system for hypernasality grading in CP speech. The database tested in this work was collected by the Hospital of Stomatology, Sichuan University, which has the largest number of CP patients in China. Based on the production process of hypernasality, source sound pulse and vocal tract filter features are presented. These features include pitch, the first and second energy-amplified frequency bands, cepstrum-based features, MFCC, and short-time sub-band energy. These features, combined with a KNN classifier, are applied to automatically classify four grades of hypernasality: normal, mild, moderate and severe. The experimental results show that the proposed system achieves good performance, with classification rates for the four hypernasality grades reaching up to 80.4%. The sensitivity of the proposed features to gender is also discussed. PMID:25814462
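
    A hedged sketch of the KNN grading step on top of per-utterance acoustic features; the feature matrix below is a synthetic stand-in for the pitch, energy-band, cepstral and sub-band energy features described, and the class separation is artificial, so the printed accuracy is not indicative of the reported 80.4%.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      grades = ["normal", "mild", "moderate", "severe"]
      n_per_grade = 60
      # synthetic stand-in for per-utterance acoustic feature vectors
      X = np.vstack([rng.normal(loc=i, scale=1.5, size=(n_per_grade, 24)) for i in range(4)])
      y = np.repeat(grades, n_per_grade)

      knn = KNeighborsClassifier(n_neighbors=5)
      scores = cross_val_score(knn, X, y, cv=5)
      print("mean cross-validated accuracy: %.2f" % scores.mean())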

  15. Automatic registration of laser-scanned point clouds based on planar features

    NASA Astrophysics Data System (ADS)

    Li, Minglei; Gao, Xinyuan; Wang, Li; Li, Guangyun

    2016-03-01

    Automatic multistation registration of laser-scanned point clouds is an active research topic. Some targets, such as common buildings, have plenty of planar features, and properly using these features as constraints can yield high-accuracy registration results. Starting from this, a new automatic multistation registration method using homologous planar features of two scan stations is proposed. In order to recognize planes from different scan stations and obtain plane equations in the corresponding scan station coordinate systems, the k-means dynamic clustering method was improved to be adaptive and robust. To match the homologous planes of the two scan stations, two different procedures were proposed: one based on the "common" relationship between planes and the other referencing the RANSAC algorithm. The transformation parameters between the two scan station coordinate systems were then calculated from the matched homologous planes. Finally, the transformation parameters based on the optimal match of planes were adopted as the final registration result. Compared with the ICP algorithm in experiments, the method proved to be effective.
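
    A hedged sketch of how a rigid transformation between two scan-station coordinate systems can be recovered from matched planes: the rotation from a Kabsch-style SVD fit of the plane normals, and the translation from the plane offsets (for matched planes n·x = d, the offsets satisfy n2·t = d2 - d1). The matched planes below are synthesized under a known transform; the paper's clustering and matching procedures are not reproduced.

      import numpy as np

      def transform_from_planes(n1, d1, n2, d2):
          """Estimate R, t mapping station-1 coordinates to station-2 coordinates
          from matched planes given as unit normals (rows of n1, n2) and offsets d."""
          # rotation: Kabsch-style fit aligning station-1 normals onto station-2 normals
          H = n1.T @ n2
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          # translation: least-squares solution of n2 . t = d2 - d1 over >= 3 planes
          t, *_ = np.linalg.lstsq(n2, d2 - d1, rcond=None)
          return R, t

      # toy usage: synthesize matched planes under a known transform and recover it
      rng = np.random.default_rng(0)
      n1 = rng.normal(size=(4, 3)); n1 /= np.linalg.norm(n1, axis=1, keepdims=True)
      d1 = rng.normal(size=4)
      ang = np.deg2rad(20)
      R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                         [np.sin(ang),  np.cos(ang), 0.0],
                         [0.0, 0.0, 1.0]])
      t_true = np.array([1.0, -2.0, 0.5])
      n2 = n1 @ R_true.T
      d2 = d1 + n2 @ t_true
      R, t = transform_from_planes(n1, d1, n2, d2)
      print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True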

  16. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In these methods, the focus is usually on improving road network detection, while precise road delineation has received less attention. In this paper, we propose a new unsupervised fully-automatic road extraction method, based on the integration of high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene and then precisely delineates them. The Hough transform is applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with high accuracy.

  17. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners, yielding promising results. We have thoroughly evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  18. A semi-automatic method of generating subject-specific pediatric head finite element models for impact dynamic responses to head injury.

    PubMed

    Li, Zhigang; Han, Xiaoqiang; Ge, Hao; Ma, Chunsheng

    2016-07-01

    To account for the effects of realistic head morphological feature variation on the impact dynamic responses to head injury, it is necessary to develop multiple subject-specific pediatric head finite element (FE) models based on computed tomography (CT) or magnetic resonance imaging (MRI) scans. However, traditional manual model development is very time-consuming. In this study, a new automatic method was developed to extract anatomical points from pediatric head CT scans to represent pediatric head morphological features (head size/shape, skull thickness, and suture/fontanel width). Subsequently, a geometry-adaptive mesh morphing method based on radial basis functions was developed that can automatically morph a baseline pediatric head FE model into target FE models with geometries corresponding to the extracted head morphological features. In the end, five subject-specific head FE models of approximately 6-month-old (6MO) subjects were automatically generated using the developed method. These validated models were employed to investigate differences in head dynamic responses among subjects with different head morphologies. The results show that variations in head morphological features have a relatively large effect on pediatric head dynamic response. The results of this study indicate that pediatric head morphological variation should be taken into account when using computational models to reconstruct pediatric head injuries due to traffic/fall accidents or child abuse, as well as when predicting head injury risk for children with obvious differences in head size and morphology. PMID:27058003
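
    A hedged sketch of the radial-basis-function morphing idea: displacements defined at corresponding anatomical landmarks are interpolated smoothly (here with SciPy's RBFInterpolator, available in SciPy >= 1.7) and applied to every node of a baseline mesh. The landmarks, node cloud and target deformation are synthetic stand-ins for the extracted head morphological features.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)

      # anatomical landmarks on the baseline head model and on the target subject (synthetic)
      baseline_landmarks = rng.uniform(-1.0, 1.0, size=(30, 3))
      target_landmarks = baseline_landmarks * 1.1 + np.array([0.02, 0.0, -0.01])

      # landmark displacements, interpolated with a thin-plate-spline RBF
      displacements = target_landmarks - baseline_landmarks
      rbf = RBFInterpolator(baseline_landmarks, displacements, kernel="thin_plate_spline")

      # morph every node of the baseline FE mesh towards the subject-specific geometry
      baseline_nodes = rng.uniform(-1.0, 1.0, size=(5000, 3))
      morphed_nodes = baseline_nodes + rbf(baseline_nodes)
      print(morphed_nodes.shape)   # (5000, 3)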

  19. Clinical evaluation of semi-automatic landmark-based lesion tracking software for CT-scans

    PubMed Central

    2014-01-01

    Background To evaluate a semi-automatic landmark-based lesion tracking software enabling navigation between RECIST lesions in baseline and follow-up CT-scans. Methods The software automatically detects 44 stable anatomical landmarks in each thoraco/abdominal/pelvic CT-scan, sets up a patient-specific coordinate system and cross-links the coordinate systems of consecutive CT-scans. Accuracy of the software was evaluated on 96 RECIST lesions (target and non-target lesions) in baseline and follow-up CT-scans of 32 oncologic patients (64 CT-scans). Patients had to present at least one thoracic, one abdominal and one pelvic RECIST lesion. Three radiologists determined the deviation between the lesions' centre and the software's navigation result in consensus. Results The initial mean runtime of the system to synchronize baseline and follow-up examinations was 19.4 ± 1.2 seconds, with subsequent navigation to corresponding RECIST lesions performed in real time. The mean vector length of the deviations between the lesions' centre and the semi-automatic navigation result was 10.2 ± 5.1 mm, without a substantial systematic error in any direction. The mean deviation was 5.4 ± 4.0 mm in the cranio-caudal dimension, 5.2 ± 3.9 mm in the lateral dimension and 5.3 ± 4.0 mm in the ventro-dorsal dimension. Conclusion The investigated software accurately and reliably navigates between lesions in consecutive CT-scans in real time, potentially accelerating and facilitating cancer staging. PMID:25609496

  20. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

    Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or if a combination of image and subject data is needed. A set of 3559 male subjects from the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned at the University Medical Center Groningen, 1725 at the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from the image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as the outcome. Support vector machine classification was performed employing different feature sets: sets of only image features, or a combination of image features and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
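
    A hedged sketch of the comparison described above: a support-vector-machine classifier is evaluated with image-only features and with image-plus-subject features, using the area under the ROC curve (Az). All feature values and outcomes below are synthetic stand-ins, so the printed Az values are not comparable to the reported 0.69 and 0.71.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 600
      calcium_feats = rng.normal(size=(n, 8))     # stand-in for calcification number/volume/size features
      subject_feats = rng.normal(size=(n, 4))     # stand-in for age, smoking status/history, past CVEs
      risk = calcium_feats[:, 0] + 0.5 * subject_feats[:, 0] + rng.normal(scale=1.5, size=n)
      y = (risk > np.quantile(risk, 0.9)).astype(int)   # ~10% have a CVE within three years

      def auc_for(X):
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
          proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
          return roc_auc_score(y, proba)

      print("image only        Az = %.2f" % auc_for(calcium_feats))
      print("image + subject   Az = %.2f" % auc_for(np.hstack([calcium_feats, subject_feats])))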

  1. Automatic NMR-based identification of chemical reaction types in mixtures of co-occurring reactions.

    PubMed

    Latino, Diogo A R S; Aires-de-Sousa, João

    2014-01-01

    The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications can range from process monitoring, metabolic profiling, authentication of products, to quality control. An application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the (1)H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants) and the same was done for products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of 6 types. From this dataset, we simulated mixtures where two reactions of different types would occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the (1)H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications was increased to 99% and 80% for the same test sets. The RF probability associated with the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data. Using no explicit structural information about the reaction participants, reaction elucidation is performed without structure elucidation of
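
    A toy sketch of the descriptor and classification idea: each mixture is encoded as the difference between the summed, binned product spectra and the summed, binned reactant spectra, and a random forest is trained to predict which pair of reaction types is present. The synthetic spectra, the four toy reaction types and the bin count are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_bins = 300                                       # binned 1H chemical-shift axis

      def random_spectrum(center):
          """Toy binned 1H spectrum: a few peaks clustered around a type-specific region."""
          s = np.zeros(n_bins)
          for _ in range(5):
              s[int(np.clip(rng.normal(center, 15), 0, n_bins - 1))] += rng.uniform(0.5, 1.0)
          return s

      def mixture_descriptor(type_a, type_b):
          """Difference spectrum (products minus reactants) of two co-occurring reactions."""
          centers = {0: 50, 1: 120, 2: 200, 3: 260}      # toy shift regions per reaction type
          reactants = random_spectrum(centers[type_a]) + random_spectrum(centers[type_b])
          products = random_spectrum(centers[type_a] + 20) + random_spectrum(centers[type_b] + 20)
          return products - reactants

      pairs = [(a, b) for a in range(4) for b in range(4) if a < b]
      X = np.array([mixture_descriptor(a, b) for a, b in pairs for _ in range(30)])
      y = np.array(["%d+%d" % (a, b) for a, b in pairs for _ in range(30)])

      rf = RandomForestClassifier(n_estimators=300, random_state=0)
      print("mean cross-validated accuracy: %.2f" % cross_val_score(rf, X, y, cv=5).mean())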

  2. Automatic NMR-Based Identification of Chemical Reaction Types in Mixtures of Co-Occurring Reactions

    PubMed Central

    Latino, Diogo A. R. S.; Aires-de-Sousa, João

    2014-01-01

    The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications can range from process monitoring, metabolic profiling, authentication of products, to quality control. An application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the 1H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants) and the same was done for products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of 6 types. From this dataset, we simulated mixtures where two reactions of different types would occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the 1H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications was increased to 99% and 80% for the same test sets. The RF probability associated with the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data. Using no explicit structural information about the reaction participants, reaction elucidation is performed without structure elucidation of the

  3. Automatic processing and modeling of GPR data for pavement thickness and properties

    NASA Astrophysics Data System (ADS)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data, and to automatically calibrate, position, process, and full-waveform model it without operator intervention. Vehicle suspension-system bounce is automatically compensated for (to account for the varying antenna height). Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled so that cross-sectional images and computed pavement thickness are displayed in real time at highway speeds.

  4. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parameter requirements, while available data are scarce. Therefore, new sources of information such as satellite data need to be explored. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied. However, the use of this kind of data requires the introduction of the concept of spatio-temporal information. In this context, there is a lack of statistics and methodologies for incorporating spatio-temporal data into the calibration and validation processes. This research aims to contribute in that direction. The ecohydrological model was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques was proposed in order to calibrate the model taking into account the temporal variation and also the spatial pattern of the observed NDVI and the simulated LAI. The obtained results demonstrate that: (1) satellite data are an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated using only satellite data, is able to reproduce the vegetation dynamics (in time and in space) as well as the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, i.e. it takes into account both the temporal variation and the spatial pattern of the analyzed data.

  5. Automatic construction of an anatomical coordinate system for three-dimensional bone models of the lower extremities--pelvis, femur, and tibia.

    PubMed

    Kai, Shin; Sato, Takashi; Koga, Yoshio; Omori, Go; Kobayashi, Koichi; Sakamoto, Makoto; Tanabe, Yuji

    2014-03-21

    Automated methods for constructing patient-specific anatomical coordinate systems (ACSs) for the pelvis, femur and tibia were developed based on the bony geometry of each, derived from computed tomography (CT). The methods used principal axes of inertia, principal component analysis (PCA), cross-sectional area, and spherical and ellipsoidal surface fitting to eliminate the influence of rater's bias on reference landmark selection. Automatic ACSs for the pelvis, femur, and tibia were successfully constructed on each 3D bone model using the developed algorithm. All constructions were performed within 30s; furthermore, between- and within- rater errors were zero for a given CT-based 3D bone model, owing to the automated nature of the algorithm. ACSs recommended by the International Society of Biomechanics (ISB) were compared with the automatically constructed ACS, to evaluate the potential differences caused by the selection of the coordinate system. The pelvis ACSs constructed using the ISB-recommended system were tilted significantly more anteriorly than those constructed automatically (range, 9.6-18.8°). There were no significant differences between the two methods for the femur. For the tibia, significant differences were found in the direction of the anteroposterior axis; the anteroposterior axes identified by ISB were more external than those in the automatic ACS (range, 17.5-25.0°). PMID:24456665
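    A minimal Python sketch of one ingredient mentioned above, computing the principal axes of a bone surface model from its vertices via PCA, is given below. The synthetic point cloud and the handedness convention are assumptions, and the published method combines this step with cross-sectional area analysis and spherical/ellipsoidal surface fitting that are not shown here.

    ```python
    import numpy as np

    def principal_axes(vertices):
        """Inertia-like principal axes of a bone surface model via PCA.
        vertices: (N, 3) array of mesh vertex coordinates."""
        centroid = vertices.mean(axis=0)
        centered = vertices - centroid
        cov = centered.T @ centered / len(vertices)
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]            # longest axis first
        axes = eigvecs[:, order].T
        # Enforce a right-handed coordinate system.
        if np.dot(np.cross(axes[0], axes[1]), axes[2]) < 0:
            axes[2] = -axes[2]
        return centroid, axes

    # Hypothetical vertex cloud elongated along x (a crude stand-in for a femur)
    rng = np.random.default_rng(1)
    pts = rng.standard_normal((2000, 3)) * np.array([40.0, 10.0, 8.0])
    origin, acs = principal_axes(pts)
    print(origin, acs)
    ```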

  6. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time, and the need for it has recently grown rapidly with improved geo-web services and the spread of smart devices. 3D urban models such as those provided by Google Earth currently rely on aerial photos, but this has limitations: building models are difficult to update immediately after changes, many buildings lack a 3D model and texture, and maintaining and updating the models requires large resources. To address these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with measured values of real buildings by comparing ratios of edge lengths; the results showed an average length-ratio error of 5.8%. With this method we could generate a simple building model with fine façade textures without expensive dedicated tools or datasets.

  7. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  8. Markov random field based automatic alignment for low SNR images for cryo electron tomography

    SciTech Connect

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R.; Elidan, Gal; Horowitz, Mark

    2007-07-21

    We present a method for automatic full precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and x-ray datasets.

  9. Dual-Model Automatic Detection of Nerve-Fibres in Corneal Confocal Microscopy Images

    PubMed Central

    Dabbah, M.A.; Graham, J.; Petropoulos, I.; Tavakoli, M.; Malik, R.A.

    2011-01-01

    Corneal Confocal Microscopy (CCM) imaging is a non-invasive surrogate means of detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve-fibres from CCM images using a dual-model detection algorithm and compares its performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve-fibres), which work interactively. Our evaluation shows significant improvement (p ≈ 0) in both error rate and signal-to-noise ratio of this model over the competitor methods. The automatic method is also evaluated in comparison with manual ground truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0). PMID:20879244

  10. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigen features are then combined and reconstructed for use in a composite filter, which is subsequently utilized for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation energy metrics and are presented in this work. The inverse-transformed eigen-bases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal data-structure information associated with the set of targets under consideration.

  11. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on a mathematical morphological black top hat; feature extraction to characterize these candidates; and classification based on a support vector machine (SVM) to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as input shows the best discriminating performance.
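    The following Python sketch illustrates the flavour of this pipeline with standard library calls: a morphological black top-hat for candidate enhancement and a quadratic polynomial SVM scored with a ROC curve. The features, labels, and structuring-element size are invented for illustration and do not reproduce the paper's feature set.

    ```python
    import numpy as np
    from skimage.morphology import black_tophat, disk
    from sklearn.svm import SVC
    from sklearn.metrics import roc_curve, auc

    def candidate_map(green_channel, radius=5):
        """Enhance dark, small candidates (possible MAs) with a morphological
        black top-hat on the green channel of the fundus image."""
        return black_tophat(green_channel, disk(radius))

    rng = np.random.default_rng(0)
    enhanced = candidate_map(rng.random((64, 64)))   # toy image, real input is a fundus photo

    # Hypothetical pre-extracted candidate features and labels (1 = true MA)
    X = rng.standard_normal((300, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    clf = SVC(kernel="poly", degree=2)               # quadratic polynomial kernel
    clf.fit(X[:200], y[:200])
    scores = clf.decision_function(X[200:])
    fpr, tpr, _ = roc_curve(y[200:], scores)
    print("AUC on held-out candidates:", auc(fpr, tpr))
    ```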

  12. Design of Radio Frequency Link in Automatic Test System for Multimode Mobile Communication Base Station

    NASA Astrophysics Data System (ADS)

    Zhang, Weipeng

    2015-12-01

    A modularized design for the radio frequency (RF) link in an automatic test system for multimode mobile communication base stations is presented, taking into account the characteristics of wireless communication indices and the composition of base station signals. The test link is divided into a general module, a time division duplex (TDD) module, a spurious noise filter module, a downlink intermodulation module, an uplink intermodulation module and an uplink block module. The composition of the modules and the link functions are defined, and the interfaces of the general module and the spurious noise filter module are described. Finally, the estimated gain budget of the test link is presented. Experiments verify that the system is reliable and that test efficiency is improved.

  13. Automatic segmentation and classification of human brain image based on a fuzzy brain atlas

    NASA Astrophysics Data System (ADS)

    Tan, Ou; Jia, Chunguang; Duan, Huilong; Lu, Weixue

    1998-09-01

    It is difficult to automatically segment and classify tomographic images of an actual patient's brain, so many interactive operations are usually performed; this is very time consuming and its precision depends heavily on the user. In this paper, we combine a brain atlas and 3D fuzzy image segmentation with image matching, which not only finds the precise boundary of each anatomic structure but also saves interactive operation time. First, the anatomic information of the atlas is mapped onto the tomographic images of the actual brain with a two-step image matching method. Then, based on the mapping result, a 3D fuzzy structure mask is calculated. Using the fuzzy information of the anatomic structures, a new fuzzy clustering method based on a genetic algorithm segments and classifies the real brain image. Only minimal interaction is required in the whole process, namely removing the skull and selecting some intrinsic point pairs.

  14. Towards an automatic statistical model for seasonal precipitation prediction and its application to Central and South Asian headwater catchments

    NASA Astrophysics Data System (ADS)

    Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model that is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random forest based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is exemplarily applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, for the monsoonal precipitation amounts in the South Asian target areas a certain skill of the model could

  15. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis

    NASA Astrophysics Data System (ADS)

    Vos, P. C.; Barentsz, J. O.; Karssemeijer, N.; Huisman, H. J.

    2012-03-01

    In this paper, a fully automatic computer-aided detection (CAD) method is proposed for the detection of prostate cancer. The CAD method consists of multiple sequential steps in order to detect locations that are suspicious for prostate cancer. In the initial stage, a voxel classification is performed using a Hessian-based blob detection algorithm at multiple scales on an apparent diffusion coefficient map. Next, a parametric multi-object segmentation method is applied and the resulting segmentation is used as a mask to restrict the candidate detection to the prostate. The remaining candidates are characterized by performing histogram analysis on multiparametric MR images. The resulting feature set is summarized into a malignancy likelihood by a supervised classifier in a two-stage classification approach. The detection performance for prostate cancer was tested on a screening population of 200 consecutive patients and evaluated using the free response operating characteristic methodology. The results show that the CAD method obtained sensitivities of 0.41, 0.65 and 0.74 at false positive (FP) levels of 1, 3 and 5 per patient, respectively. In conclusion, this study showed that it is feasible to automatically detect prostate cancer at a FP rate lower than systematic biopsy. The CAD method may assist the radiologist to detect prostate cancer locations and could potentially guide biopsy towards the most aggressive part of the tumour.

  16. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework

    NASA Astrophysics Data System (ADS)

    Jiao, A. S. M.; Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Lee, C.-C.; Lam, Y. K.

    2014-07-01

    Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than retaining the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this process can be realized by applying image segmentation algorithms to decompose an image into its constituent entities. However, because it is different from an optical image, classic image segmentation methods cannot be applied directly to a hologram, as each pixel in the hologram carries holistic, rather than local, information of the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, through the use of the segmentation on the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is reverted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram that is captured through the optical scanning holography (OSH) technique.

  17. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
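    A minimal Python sketch of the DLT step described above is given below: it estimates the pixel-to-metric homography from point correspondences and maps an image point to pool coordinates. The correspondences are invented, and the rest of the tracking pipeline (lane extraction, motion detection, scaled composite JTC) is not shown.

    ```python
    import numpy as np

    def dlt_homography(px, metric):
        """Direct Linear Transformation: estimate the 3x3 homography H mapping
        pixel coordinates to metric pool coordinates from >= 4 correspondences."""
        A = []
        for (u, v), (x, y) in zip(px, metric):
            A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
            A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)       # null-space vector gives H up to scale
        return H / H[2, 2]

    def to_metric(H, u, v):
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]

    # Hypothetical correspondences: lane-line corners in pixels vs pool metres
    pix = [(100, 400), (540, 395), (120, 80), (520, 85)]
    pool = [(0.0, 0.0), (25.0, 0.0), (0.0, 2.5), (25.0, 2.5)]
    H = dlt_homography(pix, pool)
    print(to_metric(H, 320, 240))
    ```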

  18. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing (LSI) is a method, recently proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful in the problem of text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks, in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used in testing the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.
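    A toy Python sketch of the idea of combining LSI with a neural classifier is shown below, using a truncated SVD of a TF-IDF term-document matrix as input to a small neural network. The corpus, labels, and dimensionality are invented, and the report's additional "logical sensors" are not modelled.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.neural_network import MLPClassifier

    # Toy corpus with two invented classes: 0 = commodities, 1 = finance
    docs = ["grain exports rose sharply", "central bank raises interest rates",
            "wheat harvest and grain prices", "interest rate cut by the bank"]
    labels = [0, 1, 0, 1]

    tfidf = TfidfVectorizer().fit_transform(docs)            # sparse term-document matrix
    lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)  # low-rank LSI factors

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(lsi, labels)                                     # neural net on LSI features
    print(clf.predict(lsi))
    ```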

  19. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge in features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  20. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

    This paper presents a method which automatically segments tumors from human brain MRI volumes. The model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is found and the slices potentially containing tumor are identified according to their symmetry; an initial tumor boundary is determined, in the slice where the tumor is largest, by watershed and morphological algorithms. Second, the level set method is applied to this initial boundary to drive the curve to evolve and stop at the appropriate tumor boundary. Lastly, the tumor boundary is projected slice by slice to its adjacent slices as initial boundaries, through the volume, to obtain the whole tumor. The experimental results are compared with hand tracing by an expert and show relatively good agreement.

  1. Automatic representation of urban terrain models for simulations on the example of VBS2

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Solbrig, Peter; Wernerus, Peter

    2014-10-01

    Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm and precise, particularly under the conditions of asymmetric warfare, non-cooperative urban terrain and rapidly developing situations. Going through the mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and ultimately save lives. This work illustrates the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battle Space 2, a simulation system created by Bohemia Interactive System. The topographic object types that we are able to export into this simulation engine are either results of fully automatic sensor data evaluation (buildings, trees, grass, and ground) or entities obtained from publicly available sources (streets and water areas), which can be converted into the system's format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow automatic extraction of doors and windows from laser point clouds captured from building walls and thus increase the level of detail of building models. It is therefore important to simulate these animatable entities; doing so makes some of the buildings in the simulation accessible.

  2. Automatic laboratory-based strategy to improve the diagnosis of type 2 diabetes in primary care

    PubMed Central

    Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Maria; Lugo, Javier; Pomares, Francisco J; Asencio, Alberto; Ahumada, Miguel; Leiva-Salinas, Carlos

    2016-01-01

    Introduction To study the pre-design and success of a strategy based on the addition of hemoglobin A1c (HbA1c) to the blood samples of certain primary care patients to detect new cases of type 2 diabetes. Materials and methods In a first step, we retrospectively calculated the number of HbA1c tests that would have been measured in one year had HbA1c been processed according to the guidelines of the American Diabetes Association (ADA). Based on those results, we decided to prospectively measure HbA1c, over an 18-month period, in every primary care patient above 45 years with no HbA1c in the previous 3 years and a glucose concentration between 5.6-6.9 mmol/L. We calculated the number of HbA1c tests automatically added by the LIS based on our strategy, reviewed the medical records of these subjects to confirm whether type 2 diabetes was finally diagnosed, and calculated the cost of our intervention. Results In the first stage, according to the guidelines, HbA1c should have been added to the blood samples of 13,085 patients, resulting in a cost of 14,973€. In the prospective study, the laboratory added HbA1c to 2092 patients, leading to an expense of 2393€. 314 patients had an HbA1c value ≥ 6.5% (48 mmol/mol), and 82 were finally diagnosed with type 2 diabetes: 28 thanks to our strategy, with an individual cost of 85.4€, and 54 due to requests of HbA1c by the general practitioners (GPs), with a cost of 47.5€. Conclusion The automatic laboratory-based strategy detected patients with type 2 diabetes in primary care at a cost of 85.4€ per new case. PMID:26981026

  3. Comparison of landmark-based and automatic methods for cortical surface registration.

    PubMed

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David W; Bernstein, Lynne E; Damasio, Hanna; Leahy, Richard M

    2010-02-01

    Group analysis of structure or function in cerebral cortex typically involves, as a first step, the alignment of cortices. A surface-based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark-based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  4. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    PubMed

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms. PMID:23716718

  5. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    NASA Astrophysics Data System (ADS)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow pattern and fatigue of the vessel wall and are prevalent causes leading to high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools in modeling the individual hemodynamics and biomechanics as well as their interaction in the human arteries. The computations allow non-invasively simulating patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  6. Group-wise automatic mesh-based analysis of cortical thickness

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Cody Hazlett, Heather; Niethammer, Marc; Oguz, Ipek; Cates, Joshua; Whitaker, Ross; Piven, Joseph; Styner, Martin

    2011-03-01

    The analysis of neuroimaging data from pediatric populations presents several challenges. There are normal variations in brain shape from infancy to adulthood and normal developmental changes related to tissue maturation. Measurement of cortical thickness is one important way to analyze such developmental tissue changes. We developed a novel framework that allows group-wise automatic mesh-based analysis of cortical thickness. Our approach is divided into four main parts. First an individual pre-processing pipeline is applied on each subject to create genus-zero inflated white matter cortical surfaces with cortical thickness measurements. The second part performs an entropy-based group-wise shape correspondence on these meshes using a particle system, which establishes a trade-off between an even sampling of the cortical surfaces and the similarity of corresponding points across the population using sulcal depth information and spatial proximity. A novel automatic initial particle sampling is performed using a matched 98-lobe parcellation map prior to a particle-splitting phase. Third, corresponding re-sampled surfaces are computed with interpolated cortical thickness measurements, which are finally analyzed via a statistical vertex-wise analysis module. This framework consists of a pipeline of automated 3D Slicer compatible modules. It has been tested on a small pediatric dataset and incorporated in an open-source C++ based high-level module called GAMBIT. GAMBIT's setup allows efficient batch processing, grid computing and quality control. The current research focuses on the use of an average template for correspondence and surface re-sampling, as well as thorough validation of the framework and its application to clinical pediatric studies.

  7. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions

    PubMed Central

    Hamming, N. M.; Daly, M. J.; Irish, J. C.; Siewerdsen, J. H.

    2009-01-01

    in precision was observed—specifically, the standard deviation in TRE was 0.2 mm for the automatic technique versus 0.34 mm for the manual technique (p=0.001). The projection-based automatic registration technique demonstrates accuracy and reproducibility equivalent or superior to the conventional manual technique for both neurosurgical and head and neck marker configurations. Use of this method with C-arm CBCT eliminates the burden of manual registration on surgical workflow by providing automatic registration of surgical tracking in 3D images within ∼20 s of acquisition, with registration automatically updated with each CBCT scan. The automatic registration method is undergoing integration in ongoing clinical trials of intraoperative CBCT-guided head and neck surgery. PMID:19544799

  8. A stochastic model for automatic extraction of 3D neuronal morphology.

    PubMed

    Basu, Sreetama; Kulikova, Maria; Zhizhina, Elena; Ooi, Wei Tsang; Racoceanu, Daniel

    2013-01-01

    Tubular structures are frequently encountered in bio-medical images. The center-lines of these tubules provide an accurate representation of the topology of the structures. We introduce a stochastic Marked Point Process framework for fully automatic extraction of tubular structures requiring no user interaction or seed points for initialization. Our Marked Point Process model enables unsupervised network extraction by fitting a configuration of objects with globally optimal associated energy to the centreline of the arbors. For this purpose we propose special configurations of marked objects and an energy function well adapted for detection of 3D tubular branches. The optimization of the energy function is achieved by a stochastic, discrete-time multiple birth and death dynamics. Our method finds the centreline, local width and orientation of neuronal arbors and identifies critical nodes like bifurcations and terminals. The proposed model is tested on 3D light microscopy images from the DIADEM data set with promising results. PMID:24505691

  9. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.
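    The contrast between automatic differentiation and divided differences reported above can be illustrated in a few lines. The Python sketch below uses forward-mode dual numbers on a stand-in scalar function rather than the actual Navier-Stokes solver, and the central-difference step size is chosen arbitrarily.

    ```python
    import math

    class Dual:
        """Minimal forward-mode AD value: carries f and df/dx together."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__

    def sin(x):
        # Chain rule for sin when given a Dual, plain math.sin otherwise.
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot) if isinstance(x, Dual) else math.sin(x)

    def f(x):                # a stand-in for a differentiable solver output
        return 3.0 * x * x + sin(x)

    x0 = 1.3
    exact = f(Dual(x0, 1.0)).dot                 # AD: exact to machine precision
    h = 1e-6
    dd = (f(x0 + h) - f(x0 - h)) / (2 * h)       # divided (central) difference
    print(exact, dd)
    ```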

  10. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated and measured doses. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
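    A simplified Python sketch of the commissioning idea, adjusting phase-space-let weights so that the weighted sum of pre-computed doses matches measurement under a smoothness penalty, is given below. The dose matrix, dimensions, and plain least-squares solver are assumptions; the paper uses an augmented Lagrangian method and symmetry regularization that are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical precomputed dose matrix: dose in water for each of n_psl
    # phase-space-lets at n_vox measurement points, plus a measured dose vector.
    rng = np.random.default_rng(0)
    n_vox, n_psl = 200, 30
    D = rng.random((n_vox, n_psl))
    w_true = np.linspace(0.5, 1.5, n_psl)
    d_meas = D @ w_true + 0.01 * rng.standard_normal(n_vox)

    lam = 1e-2                                   # smoothness regularization weight

    def objective(w):
        misfit = D @ w - d_meas
        smooth = np.diff(w)                      # penalize jumps between neighbouring PSLs
        return misfit @ misfit + lam * (smooth @ smooth)

    res = minimize(objective, x0=np.ones(n_psl), method="L-BFGS-B",
                   bounds=[(0.0, None)] * n_psl)  # non-negative PSL weights
    print(np.round(res.x[:5], 3), np.round(w_true[:5], 3))
    ```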

  11. Automatic and continuous landslide monitoring: the Rotolon Web-based platform

    NASA Astrophysics Data System (ADS)

    Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro

    2013-04-01

    Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that, since 1985, has been threatening the nearby village of Recoaro Terme. The first written record of a landslide occurrence dates back to 1798. After the last re-activation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m3 detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation deployed over the landslide body are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available through a web browser with different access rights. The web environment provides the following advantages: 1) data are collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. At this site two monitoring systems are currently in operation: 1) a GB-InSAR radar interferometer (University of Florence - Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a Web-based solution (CNR-IRPI Padova). This work deals with details on the methodology, services and techniques adopted for the second

  12. An automatic method to determine cutoff frequency based on image power spectrum

    SciTech Connect

    Beis, J.S.; Celler, A.; Barney, J.S.

    1995-12-01

    The authors present an algorithm for automatically choosing filter cutoff frequency (F_c) using the power spectrum of the projections. The method is based on the assumption that the expectation of the image power spectrum is the sum of the expectation of the blurred object power spectrum (dominant at low frequencies) plus a constant value due to Poisson noise. By considering the discrete components of the noise-dominated high-frequency spectrum as a Gaussian distribution N(μ,σ), the Student t-test determines F_c as the highest frequency for which the image frequency components are unlikely to be drawn from N(μ,σ). The method is general and can be applied to any filter. In this work, the authors tested the approach using the Metz restoration filter on simulated, phantom, and patient data with good results. Quantitative performance of the technique was evaluated by plotting recovery coefficient (RC) versus NMSE of reconstructed images.
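    The following Python sketch re-implements the spirit of this approach on a 1D projection: the high-frequency tail of the power spectrum estimates the flat noise floor, and a t-test scanning from high to low frequency returns the highest frequency whose local spectrum departs from that noise distribution. The window size, noise band, and significance level are assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def auto_cutoff(projection, noise_band=0.35, alpha=0.01):
        """Pick a cutoff frequency from the power spectrum of a 1D projection."""
        ps = np.abs(np.fft.rfft(projection)) ** 2
        freqs = np.fft.rfftfreq(len(projection))
        noise = ps[freqs >= noise_band]                  # noise-dominated tail
        for i in range(len(ps) - 1, -1, -1):             # scan from high to low frequency
            window = ps[max(0, i - 4): i + 1]            # small local sample
            t_stat, p = stats.ttest_1samp(window, noise.mean())
            if p < alpha and window.mean() > noise.mean():
                return freqs[i]                          # highest "signal-like" frequency
        return freqs[-1]

    # Hypothetical noisy projection: smooth object plus counting noise
    x = np.linspace(0, 1, 256)
    proj = np.exp(-((x - 0.5) / 0.1) ** 2) * 100 + np.random.default_rng(0).poisson(5, 256)
    print("cutoff (cycles/pixel):", auto_cutoff(proj))
    ```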

  13. Automatic Kappa Angle Estimation for Air Photos Based on Phase Only Correlation

    NASA Astrophysics Data System (ADS)

    Xiong, Z.; Stanley, D.; Xin, Y.

    2016-06-01

    The approximate values of the exterior orientation parameters are needed for air photo bundle adjustment. Usually the airborne GPS/IMU can provide the initial values for the camera position and attitude angles. However, in some cases the camera's attitude angles are not available due to the lack of an IMU or for other reasons. In this case, the kappa angle needs to be estimated for each photo before bundle adjustment. The kappa angle can be obtained from Ground Control Points (GCPs) in the photo. Unfortunately, enough GCPs are not always available. In order to overcome this problem, an algorithm is developed to automatically estimate the kappa angle for air photos based on the phase only correlation technique. This function has been embedded in PCI software. Extensive experiments show that this algorithm is fast, reliable, and stable.
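    A minimal Python implementation of phase-only correlation is sketched below; the peak of the correlation surface gives the relative shift between two image patches. In the kappa-estimation setting the same peak measure can be evaluated over trial rotations, which is only indicated here as an assumption since the paper's exact procedure is not described in the abstract.

    ```python
    import numpy as np

    def phase_only_correlation(img_a, img_b):
        """Phase-only correlation surface between two equally sized images.
        The peak location gives the integer translation between them."""
        Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12            # keep phase only
        poc = np.real(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        return poc, peak

    # Hypothetical overlapping photo patches shifted by (3, 7) pixels
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = np.roll(a, shift=(3, 7), axis=(0, 1))
    _, peak = phase_only_correlation(b, a)
    print("estimated shift:", peak)
    ```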

  14. A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy

    PubMed Central

    Kim, Hyun Seok; Kim, So Young; Kim, Young Ho; Park, Kwang Suk

    2015-01-01

    Facial nerve palsy induces a weakness or loss of facial expression through damage of the facial nerve. A quantitative and reliable assessment system for facial nerve palsy is required for both patients and clinicians. In this study, we propose a rapid and portable smartphone-based automatic diagnosis system that discriminates facial nerve palsy from normal subjects. Facial landmarks are localized and tracked by an incremental parallel cascade of the linear regression method. An asymmetry index is computed using the displacement ratio between the left and right side of the forehead and mouth regions during three motions: resting, raising eye-brow and smiling. To classify facial nerve palsy, we used Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), and Leave-one-out Cross Validation (LOOCV) with 36 subjects. The classification accuracy rate was 88.9%. PMID:26506352
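    A toy Python sketch of the classification stage is given below: per-subject asymmetry indices are fed to Linear Discriminant Analysis and scored with leave-one-out cross validation. The feature values for 36 subjects are simulated, and the landmark localization and tracking steps are not shown.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def asymmetry_index(left_disp, right_disp):
        """Displacement ratio between the left and right landmark groups during
        one motion (resting, eyebrow raise or smile); 1.0 means perfect symmetry."""
        return min(left_disp, right_disp) / max(left_disp, right_disp)

    print(asymmetry_index(4.2, 5.0))   # example ratio for one motion

    # Hypothetical per-subject features: asymmetry indices for forehead and mouth
    # over the three motions (6 values), 36 subjects, label 1 = palsy.
    rng = np.random.default_rng(0)
    healthy = rng.normal(0.95, 0.03, size=(18, 6))
    palsy = rng.normal(0.70, 0.10, size=(18, 6))
    X = np.vstack([healthy, palsy])
    y = np.array([0] * 18 + [1] * 18)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print("LOOCV accuracy:", acc.mean())
    ```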

  15. An Automatic Impact-based Delamination Detection System for Concrete Bridge Decks

    SciTech Connect

    Zhang, Gang; Harichandran, Ronald S.; Ramuhalli, Pradeep

    2012-01-02

    Delamination of concrete bridge decks is a commonly observed distress in corrosive environments. In traditional acoustic inspection methods, delamination is assessed by the "hollowness" of the sound created by impacting the bridge deck with a hammer or bar or by dragging a chain; the signals are often contaminated by ambient traffic noise and the detection is highly subjective. In the proposed method, a modified version of independent component analysis (ICA) is used to filter the traffic noise. To eliminate subjectivity, Mel-frequency cepstral coefficients (MFCC) are used as features for detection and the delamination is detected by a radial basis function (RBF) neural network. Results from both experimental and field data suggest that the proposed method is noise robust and has satisfactory performance. The method can also detect delamination of repair patches and of the concrete below the repair patches. The algorithms were incorporated into an automatic impact-based delamination detection (AIDD) system for field application.

  16. Automatic Carbon Dioxide-Methane Gas Sensor Based on the Solubility of Gases in Water

    PubMed Central

    Cadena-Pereda, Raúl O.; Rivera-Muñoz, Eric M.; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J.; Anaya-Rivera, Ely K.

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0–100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible. PMID:23112626

  17. Automatic intraductal breast carcinoma classification using a neural network-based recognition system.

    PubMed

    Reigosa, A; Hernández, L; Torrealba, V; Barrios, V; Montilla, G; Bosnjak, A; Araez, M; Turiaf, M; Leon, A

    1998-07-01

    A contour-based automatic recognition system was applied to classify intraductal breast carcinoma into high nuclear grade and low nuclear grade in digitized histologic images. The discriminating image characteristics were selected for their invariance to rotation and translation and were acquired from cellular contour information. A fully interconnected multilayer perceptron network architecture was selected and trained with the error back-propagation algorithm. Forty cases were analyzed by the system and the diagnoses were compared with the pathologist consensus, obtaining agreement in 97.5% of cases (p < .00001). The system may become a very useful tool for the pathologist in the definitive classification of intraductal carcinoma. PMID:21223442

  18. A smartphone-based automatic diagnosis system for facial nerve palsy.

    PubMed

    Kim, Hyun Seok; Kim, So Young; Kim, Young Ho; Park, Kwang Suk

    2015-01-01

    Facial nerve palsy induces a weakness or loss of facial expression through damage of the facial nerve. A quantitative and reliable assessment system for facial nerve palsy is required for both patients and clinicians. In this study, we propose a rapid and portable smartphone-based automatic diagnosis system that discriminates facial nerve palsy from normal subjects. Facial landmarks are localized and tracked by an incremental parallel cascade of the linear regression method. An asymmetry index is computed using the displacement ratio between the left and right side of the forehead and mouth regions during three motions: resting, raising eye-brow and smiling. To classify facial nerve palsy, we used Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), and Leave-one-out Cross Validation (LOOCV) with 36 subjects. The classification accuracy rate was 88.9%. PMID:26506352

  19. A semi-automatic web based tool for the selection of research projects reviewers.

    PubMed

    Pupella, Valeria; Monteverde, Maria Eugenia; Lombardo, Claudio; Belardelli, Filippo; Giacomini, Mauro

    2014-01-01

    The correct evaluation of research proposals remains problematic, and in many cases grants and fellowships are subject to this type of assessment. A web-based semi-automatic tool to help in the selection of reviewers was developed. The core of the proposed system is the matching of the MeSH Descriptors of the publications submitted by the reviewers (for their accreditation) with the Descriptors linked to the selected research keywords. Moreover, a citation-related index was also calculated and adopted in order to discard unsuitable reviewers. This tool was used as a support within a website for the evaluation of candidates applying for a fellowship in the oncology field. PMID:25160328
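    As a sketch of how the core matching could look, the Python snippet below ranks candidate reviewers by the Jaccard overlap between their publications' MeSH descriptors and those linked to the project keywords, discarding reviewers below a citation-related threshold. The data, threshold, and scoring are invented for illustration and are not the published tool's actual criteria.

    ```python
    def reviewer_scores(project_mesh, reviewers, min_citation_index=0.5):
        """Rank candidate reviewers by MeSH overlap with the project descriptors,
        discarding reviewers below a citation-related threshold."""
        ranked = []
        for name, info in reviewers.items():
            if info["citation_index"] < min_citation_index:
                continue                                   # not suitable
            common = project_mesh & info["mesh"]
            union = project_mesh | info["mesh"]
            ranked.append((len(common) / len(union), name))
        return sorted(ranked, reverse=True)

    project = {"Neoplasms", "Immunotherapy", "T-Lymphocytes"}
    candidates = {
        "Reviewer A": {"mesh": {"Neoplasms", "Immunotherapy", "Mice"}, "citation_index": 0.9},
        "Reviewer B": {"mesh": {"Diabetes Mellitus", "Insulin"}, "citation_index": 0.8},
        "Reviewer C": {"mesh": {"Neoplasms", "T-Lymphocytes"}, "citation_index": 0.3},
    }
    print(reviewer_scores(project, candidates))
    ```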

  20. Methods and automatic procedures for processing images based on blind evaluation of noise type and characteristics

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Abramov, Sergey K.; Ponomarenko, Nikolay N.; Uss, Mikhail L.; Zriakhov, Mikhail; Vozel, Benoit; Chehdi, Kacem; Astola, Jaakko T.

    2011-01-01

    In many modern applications, methods and algorithms used for image processing require a priori knowledge or estimates of the noise type and its characteristics. The noise type and basic parameters can sometimes be known in advance or determined in an interactive manner. However, it happens more and more often that they must be estimated in a blind manner. The results of blind noise-type determination can be false, and the estimates of noise parameters are characterized by a certain accuracy. Such false decisions and estimation errors have an impact on the performance of image-processing techniques that are based on the obtained information. We address some issues of such negative influence. Possible structures of automatic procedures are presented and discussed for several typical image-processing applications, such as remote sensing data preprocessing and compression.
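    One concrete example of a blind noise-parameter estimate, not taken from the paper, is the robust high-pass estimator sketched below in Python: it infers the standard deviation of additive noise from the median absolute deviation of mixed second differences, which suppress most of the underlying scene.

    ```python
    import numpy as np

    def estimate_noise_sigma(image):
        """Blind estimate of the additive-noise standard deviation from a single
        image, using robust statistics of simple high-pass residuals."""
        # Mixed second differences cancel smooth scene content almost entirely.
        hp = image[1:, 1:] - image[:-1, 1:] - image[1:, :-1] + image[:-1, :-1]
        # For pure noise, std(hp) = 2 * sigma; MAD/0.6745 is a robust std estimate.
        return np.median(np.abs(hp)) / (0.6745 * 2.0)

    rng = np.random.default_rng(0)
    scene = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256)) * 100
    noisy = scene + rng.normal(0, 3.0, scene.shape)
    print("true sigma 3.0, estimated:", round(estimate_noise_sigma(noisy), 2))
    ```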

  1. A Web-Based Assessment for Phonological Awareness, Rapid Automatized Naming (RAN) and Learning to Read Chinese

    ERIC Educational Resources Information Center

    Liao, Chen-Huei; Kuo, Bor-Chen

    2011-01-01

    The present study examined the equivalency of conventional and web-based tests in reading Chinese. Phonological awareness, rapid automatized naming (RAN), reading accuracy, and reading fluency tests were administered to 93 grade 6 children in Taiwan with both test versions (paper-pencil and web-based). The results suggest that conventional and…

  2. Automatic Concept-Based Query Expansion Using Term Relational Pathways Built from a Collection-Specific Association Thesaurus

    ERIC Educational Resources Information Center

    Lyall-Wilson, Jennifer Rae

    2013-01-01

    The dissertation research explores an approach to automatic concept-based query expansion to improve search engine performance. It uses a network-based approach for identifying the concept represented by the user's query and is founded on the idea that a collection-specific association thesaurus can be used to create a reasonable representation of…

  3. A hybrid model for automatic identification of risk factors for heart disease.

    PubMed

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death both in the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with the complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task. PMID:26375492

  4. Automatic BSS-based filtering of metallic interference in MEG recordings: definition and validation using simulated signals

    NASA Astrophysics Data System (ADS)

    Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio

    2015-08-01

    Objective. One of the principal drawbacks of magnetoencephalography (MEG) is its high sensitivity to metallic artifacts, which come from implanted intracranial electrodes and dental ferromagnetic prosthesis and produce a high distortion that masks cerebral activity. The aim of this study was to develop an automatic algorithm based on blind source separation (BSS) techniques to remove metallic artifacts from MEG signals. Approach. Three methods were evaluated: AMUSE, a second-order technique; and INFOMAX and FastICA, both based on high-order statistics. Simulated signals consisting of real artifact-free data mixed with real metallic artifacts were generated to objectively evaluate the effectiveness of BSS and the subsequent interference reduction. A completely automatic detection of metallic-related components was proposed, exploiting the known characteristics of the metallic interference: regularity and low frequency content. Main results. The automatic procedure was applied to the simulated datasets and the three methods exhibited different performances. Results indicated that AMUSE preserved and consequently recovered more brain activity than INFOMAX and FastICA. Normalized mean squared error for AMUSE decomposition remained below 2%, allowing an effective removal of artifactual components. Significance. To date, the performance of automatic artifact reduction has not been evaluated in MEG recordings. The proposed methodology is based on an automatic algorithm that provides an effective interference removal. This approach can be applied to any MEG dataset affected by metallic artifacts as a processing step, allowing further analysis of unusable or poor quality data.
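    The detection logic can be illustrated with a generic Python sketch using scikit-learn's FastICA (one of the three methods evaluated above): components whose spectral power is concentrated at low frequencies, i.e. slow and regular interference, are removed before reconstruction. The frequency threshold, power ratio, and simulated data are assumptions; the study's actual criteria, the AMUSE decomposition it favoured, and MEG-specific preprocessing are not reproduced here.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_low_freq_components(data, sfreq, f_max=4.0, power_ratio=0.6):
        """BSS-based interference reduction sketch: decompose multichannel data
        (channels x samples) with FastICA, flag components whose spectral power
        is concentrated below f_max Hz, and reconstruct without them."""
        ica = FastICA(n_components=data.shape[0], random_state=0)
        sources = ica.fit_transform(data.T).T            # components x samples
        keep = np.ones(sources.shape[0], dtype=bool)
        freqs = np.fft.rfftfreq(sources.shape[1], 1.0 / sfreq)
        for i, s in enumerate(sources):
            ps = np.abs(np.fft.rfft(s)) ** 2
            if ps[freqs <= f_max].sum() / ps.sum() > power_ratio:
                keep[i] = False                           # artifact-like component
        return ica.mixing_[:, keep] @ sources[keep] + ica.mean_[:, None]

    # Hypothetical 8-channel, 10 s recording at 250 Hz with a slow drift artifact
    rng = np.random.default_rng(0)
    sfreq, n = 250, 2500
    t = np.arange(n) / sfreq
    brain = rng.standard_normal((8, n))
    drift = np.outer(rng.random(8), np.sin(2 * np.pi * 0.5 * t)) * 5
    print(remove_low_freq_components(brain + drift, sfreq).shape)
    ```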

  5. Automatic two- and three-dimensional mesh generation based on fuzzy knowledge processing

    NASA Astrophysics Data System (ADS)

    Yagawa, G.; Yoshimura, S.; Soneda, N.; Nakao, K.

    1992-09-01

    This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori based on certain theories or past experience of experts of FEM analyses. For example, such human experts can determine certain nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure of node placement according to this function. In the cases of the nodal patterns for stress concentration regions, the membership function which is utilized in the fuzzy knowledge processing has two meanings, i.e. the “closeness” of nodal location to each stress concentration field as well as “nodal density”. This is attributed to the fact that a denser nodal pattern is required near a stress concentration field. What a user has to do in a practical mesh generation process are to choose several local nodal patterns properly and to designate the maximum nodal density of each pattern. After those simple operations by the user, the system places the chosen nodal patterns automatically in an analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Then triangular or tetrahedral elements are generated by means of the advancing front method. The key issue of the present algorithm is an easy control of complex two- or three-dimensional nodal density distribution by means of the fuzzy knowledge processing technique. To demonstrate fundamental performances of the present algorithm, a prototype system was constructed with one of object-oriented languages, Smalltalk-80 on a 32-bit microcomputer, Macintosh II. The mesh generation of several two- and three-dimensional domains with cracks, holes and junctions was presented as examples.

  6. Automatic Identification of Web-Based Risk Markers for Health Events

    PubMed Central

    Borsa, Diana; Hayward, Andrew C; McKendry, Rachel A; Cox, Ingemar J

    2015-01-01

    Background The escalating cost of global health care is driving the development of new technologies to identify early indicators of an individual’s risk of disease. Traditionally, epidemiologists have identified such risk factors using medical databases and lengthy clinical studies but these are often limited in size and cost and can fail to take full account of diseases where there are social stigmas or to identify transient acute risk factors. Objective Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events associated with an individual user and automatically generate Web-based risk markers for some of the common medical conditions worldwide, from cardiovascular disease to sexually transmitted infections and mental health conditions, as well as pregnancy. Methods Using anonymized datasets, we present methods to first distinguish individuals likely to have experienced specific health events, and classify them into distinct categories. We then use the self-controlled case series method to find the incidence of health events in risk periods directly following a user’s search for a query category, and compare to the incidence during other periods for the same individuals. Results Searches for pet stores were risk markers for allergy. We also identified some possible new risk markers; for example: searching for fast food and theme restaurants was associated with a transient increase in risk of myocardial infarction, suggesting this exposure goes beyond a long-term risk factor but may also act as an acute trigger of myocardial infarction. Dating and adult content websites were risk markers for sexually transmitted infections, such as human immunodeficiency virus (HIV). Conclusions Web-based methods provide a powerful, low-cost approach to automatically identify risk factors, and support more timely and personalized public health efforts to bring human and economic benefits. PMID

  7. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    NASA Astrophysics Data System (ADS)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have been proven to be very effective. Unfortunately, the monitoring process of the treatment procedure remains manual and hence time consuming and prone to human errors. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer and to investigate it further in order to determine the effectiveness of any treatment process followed. In ophthalmology, an ulcer area is detected for further inspection via luminous excitation of a dye. Usually in the imaging systems utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour as compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially a pre-processing stage that carries out a local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly, we deal with the removal of potential reflections from the affected areas by making use of image registration of two candidate corneal images based on the detected corneal areas. Thirdly, the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary detected via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breaks in the corneal boundary due to occlusion, noise, and image quality degradation. The ratio of the ulcer area confined within the corneal area to the corneal area is used as a measure of comparison. We demonstrate the use of the proposed tool in the analysis of the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of corneal size over time.

  8. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    NASA Astrophysics Data System (ADS)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors that describe human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the locations affected. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective and that a number of them have already been classified for past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been built using a classifier based on a supervised learning technique, the decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), with each node splitting the space depending on the possible answers. Its implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities with 73% correctly classified polls. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Moreover, the integration of an automatic seismic intensity methodology with a low error probability and a basic georeferencing system will allow the generation of preliminary isoseismal maps
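
    As a rough illustration of the classification step, the sketch below trains a decision tree on encoded poll answers. The paper used Weka's J48 (an implementation of C4.5); scikit-learn's CART-based DecisionTreeClassifier is used here only as an analogue, and the poll answers, intensity labels and tree depth are hypothetical.

```python
# Sketch: train a decision tree on encoded poll answers to predict macroseismic intensity.
# The study used Weka's J48 (C4.5); scikit-learn's CART tree is used here as an analogue.
# The answer matrix and intensity labels below are hypothetical stand-ins for real poll data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
answers = rng.integers(0, 5, size=(500, 12))     # hypothetical encoded answers to 12 questions
intensity = rng.integers(3, 7, size=500)         # hypothetical intensity classes III-VI

X_train, X_test, y_train, y_test = train_test_split(
    answers, intensity, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
print(f"correctly classified polls: {tree.score(X_test, y_test):.0%}")
```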

  9. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  10. Automatic detecting method of LED signal lamps on fascia based on color image

    NASA Astrophysics Data System (ADS)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile. Automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed that is composed of three parts: detection of the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Hundreds of fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system. Further, the detection results were demonstrated to be stable and accurate.

  11. Automatic registration of multisensor images with affine deformation based on triangle area representation

    NASA Astrophysics Data System (ADS)

    Li, Bin; Wang, Wei; Ye, Hao

    2013-01-01

    A new automatic feature-based registration algorithm of multisensor images with affine deformation is presented. Although the feature-based registration methods have an advantage in reducing computational load over the area-based ones, certain typical issues such as complex deformation and grayscale discrepancy existing in a multisensor image pair will make the design of a robust feature descriptor challenging. To deal with these issues, in contrast with most existing feature-based methods that describe feature points directly, we introduce an additional procedure of feature quadrilateral construction in the proposed algorithm. Then the descriptors for the constructed feature quadrilaterals are designed based on the affine-invariance property of triangle area representation. By doing these, the proposed algorithm's robustness to both affine deformation and grayscale discrepancy can be guaranteed. Besides, since the calculation of feature descriptors only involves simple algebraic operations, the proposed method has low computational load. Experimental results using real multisensor image pairs are presented to show the merits of the proposed method.
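
    The affine invariance that the descriptor relies on follows from the fact that an affine map x -> Ax + b scales every triangle area by |det(A)|, so ratios of triangle areas within a feature quadrilateral are unchanged. The sketch below is a simplified illustration of such a triangle-area-based descriptor, not the authors' exact formulation; the vertex ordering and normalization are assumptions.

```python
# Sketch: affine-invariant descriptor from triangle area ratios of a feature quadrilateral.
# Under x -> Ax + b every triangle area is scaled by |det(A)|, so the normalized vector of
# the four sub-triangle areas is unchanged. A consistent vertex order is assumed.
import numpy as np

def triangle_area(p, q, r):
    # 0.5 * |cross product| of the two edge vectors
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def tar_descriptor(quad):
    """quad: (4, 2) array of quadrilateral vertices in a consistent order."""
    p0, p1, p2, p3 = quad
    areas = np.array([
        triangle_area(p0, p1, p2),
        triangle_area(p1, p2, p3),
        triangle_area(p2, p3, p0),
        triangle_area(p3, p0, p1),
    ])
    return areas / areas.sum()                    # area ratios are affine-invariant

A = np.array([[1.3, 0.4], [-0.2, 0.9]])           # example affine transform
quad = np.array([[0.0, 0.0], [2.0, 0.1], [2.2, 1.9], [0.3, 1.7]])
warped = quad @ A.T + np.array([5.0, -3.0])
print(np.allclose(tar_descriptor(quad), tar_descriptor(warped)))   # True
```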

  12. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  13. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. The ICA and FastICA algorithms are defined as Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using a Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings, respectively; and (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and conducting simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.

  14. An Automatic 3d Reconstruction Method Based on Multi-View Stereo Vision for the Mogao Grottoes

    NASA Astrophysics Data System (ADS)

    Xiong, J.; Zhong, S.; Zheng, L.

    2015-05-01

    This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching due to repeatable or similar features of binocular images. In order to greatly reduce the probability of mismatching and improve measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths to add extra matching constraints and offer multiple measurements. Matching error based on the epipolar constraint is introduced to remove the mismatched points. Finally, an accurate point cloud can be generated by multi-image matching and sub-pixel interpolation. Delaunay triangulation and texture mapping are performed to obtain the 3D model of a scene. The method has been tested on the 3D reconstruction of several scenes of the Mogao Grottoes, and good results verify its effectiveness.

  15. Automatic tuning of liver tissue model using simulated annealing and genetic algorithm heuristic approaches

    NASA Astrophysics Data System (ADS)

    Sulaiman, Salina; Bade, Abdullah; Lee, Rechard; Tanalol, Siti Hasnah

    2014-07-01

    The Mass Spring Model (MSM) is highly efficient in terms of calculation and ease of implementation. Mass, spring stiffness coefficient and damping constant are the three major components of the MSM. This paper focuses on identifying the spring stiffness coefficient and damping constant using an automated tuning method based on optimization, in order to generate a human liver model capable of responding quickly. To achieve this objective, two heuristic approaches are used on the human liver model data set, namely Simulated Annealing (SA) and the Genetic Algorithm (GA). The mechanical properties taken into consideration are anisotropy and viscoelasticity. Optimization results from SA and GA are then implemented in the MSM to model two human livers, each with its SA or GA construction parameters. These techniques are implemented with FEM construction parameters as a benchmark. The step responses of both models are obtained after the MSMs are solved using fourth-order Runge-Kutta (RK4) in order to compare their elasticity responses. Remodelling time using manual calculation methods was compared against the heuristic optimization methods of SA and GA, showing that the model with automatic construction is more realistic in terms of real-time interaction response time. Liver models generated using the SA and GA optimization techniques are compared with the liver model from manual calculation. The results show that the reconstruction time required for 1000 repetitions of SA and GA is shorter than with the manual method. Meanwhile, comparison of the construction times of the SA and GA models indicates that the SA model is faster than GA by 0.110635 seconds per 1000 repetitions. Real-time interaction of mechanical properties is dependent on the rate of time and the speed of the remodelling process. Thus, SA and GA have proven suitable for enhancing the realism of simulated real-time interaction in liver remodelling.
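
    To make the integration step concrete, the sketch below computes the step response of a single mass-spring-damper node with classical RK4, in the spirit of the comparison described above. The mass, stiffness, damping and step-force values are illustrative only, not the tuned liver parameters from the paper.

```python
# Sketch: step response of one mass-spring-damper node integrated with classical RK4.
# m, k, c and the step force are illustrative values, not the paper's tuned parameters.
import numpy as np

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

m, k, c, force = 1.0, 50.0, 2.0, 1.0               # mass, stiffness, damping, step force

def deriv(t, y):
    x, v = y                                       # displacement and velocity
    return np.array([v, (force - k * x - c * v) / m])

y, h = np.array([0.0, 0.0]), 1e-3
for i in range(5000):
    y = rk4_step(deriv, y, i * h, h)
print(f"displacement settles near {y[0]:.4f} (static value {force / k:.4f})")
```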

  16. Automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially-available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability, such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to that of humans, in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an infinite number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.

  17. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging
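
    The spatial-overlap figures quoted above are Dice coefficients. Below is a minimal sketch of the computation, assuming the automated and manually corrected segmentations are boolean NumPy volumes of the same shape.

```python
# Sketch: Dice coefficient between an automated and a manually corrected binary segmentation.
# `auto_mask` and `manual_mask` are assumed to be boolean numpy arrays of identical shape.
import numpy as np

def dice(auto_mask, manual_mask):
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

# toy example with two overlapping cubes
a = np.zeros((64, 64, 64), dtype=bool); a[10:40, 10:40, 10:40] = True
b = np.zeros_like(a);                   b[12:40, 10:40, 10:40] = True
print(f"Dice = {dice(a, b):.3f}")
```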

  18. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging

  19. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    NASA Astrophysics Data System (ADS)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. Available maps of the study area are one of the most important sources for this purpose. Although old databases and maps are a great resource, the training data extracted from them are likely to contain errors, which affects the classification procedure; as a result, editing the training samples is essential. Due to the urban nature of the study area and the problems encountered with pixel-based methods, object-based classification is applied. To this end, the image is segmented into four scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Because the map is old, these samples are uncertain and may contain wrong data. To handle this issue, an editing process based on the K-nearest neighbour and k-means algorithms is proposed. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, the classified image is compared with the existing map and the changed areas are detected.

  20. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value  =  0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid solution to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types. PMID:26640970
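
    The quoted limits of agreement are consistent with the standard Bland-Altman formulation (mean difference ± 1.96 standard deviations of the paired differences). The sketch below computes them under that assumption; the paired rate arrays are hypothetical inputs, not the study's data.

```python
# Sketch: Bland-Altman limits of agreement between video-derived and reference respiration rates.
# Uses the conventional bias +/- 1.96*SD formulation; `vrm_bpm` and `ref_bpm` are assumed to be
# paired arrays of rate estimates in breaths per minute (hypothetical inputs).
import numpy as np

def limits_of_agreement(vrm_bpm, ref_bpm):
    diff = np.asarray(vrm_bpm, dtype=float) - np.asarray(ref_bpm, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias - spread, bias + spread

lo, hi = limits_of_agreement([15.2, 18.1, 22.4, 30.0], [15.0, 19.0, 21.8, 31.1])
print(f"limits of agreement: {lo:.2f} to {hi:.2f} bpm")
```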

  1. Empirical study on neural network based predictive techniques for automatic number plate recognition

    NASA Astrophysics Data System (ADS)

    Shashidhara, M. S.; Indrakumar, S. S.

    2011-10-01

    The objective of this study is to provide an easy, accurate and effective technology for Bangalore city traffic control, based on image processing and laser beam technology. The core concept chosen here is an automatic number plate recognition system. First, the number plate is recognized if a vehicle breaks the traffic rules at a signal. The vehicle's details are then fetched automatically from the database of the RTO office. Next, the system sends the notice and penalty-related information to the vehicle owner's e-mail address, and an SMS is sent to the owner. In this paper, we use cameras with zooming options and laser beams to obtain accurate pictures, and apply image processing techniques such as edge detection to understand the vehicle, locate the number plate, and identify the plate number for further use, covering plain plate numbers, number plates with additional information, and number plates in different fonts. The database of the vehicle registration office is accessed to identify the name, address and other information associated with the vehicle number, and the database is updated to record the violation and penalty. A feed-forward artificial neural network is used for OCR. This procedure is particularly important for glyphs that are visually similar, such as '8' and '9', and results in training sets of between 25,000 and 40,000 samples. Over-training of the neural network is prevented by Bayesian regularization. The neural network output value is set to 0.05 when the input is not the desired glyph, and 0.95 for correct input.

  2. High-throughput full-automatic synchrotron-based tomographic microscopy

    SciTech Connect

    Mader, Kevin; Marone, Federica; Hintermüller, Christoph; Mikuljan, Gordan; Isenegger, Andreas; Stampanoni, Marco

    2011-08-16

    At the TOMCAT (TOmographic Microscopy and Coherent rAdiology experimenTs) beamline of the Swiss Light Source with an energy range of 8-45 keV and voxel size from 0.37 µm to 7.4 µm, full tomographic datasets are typically acquired in 5 to 10 min. To exploit the speed of the system and enable high-throughput studies to be performed in a fully automatic manner, a package of automation tools has been developed. The samples are automatically exchanged, aligned, moved to the correct region of interest, and scanned. This task is accomplished through the coordination of Python scripts, a robot-based sample-exchange system, sample positioning motors and a CCD camera. The tools are suited for any samples that can be mounted on a standard SEM stub, and require no specific environmental conditions. Up to 60 samples can be analyzed at a time without user intervention. The throughput of the system is dependent on resolution, energy and sample size, but rates of four samples per hour have been achieved with 0.74 µm voxel size at 17.5 keV. The maximum intervention-free scanning time is theoretically unlimited, and in practice experiments have been running unattended as long as 53 h (the average beam time allocation at TOMCAT is 48 h per user). The system is the first fully automated high-throughput tomography station: mounting samples, finding regions of interest, scanning and reconstructing can be performed without user intervention. The system also includes many features which accelerate and simplify the process of tomographic microscopy.

  3. Mitochondrial complex I and cell death: a semi-automatic shotgun model

    PubMed Central

    Gonzalez-Halphen, D; Ghelli, A; Iommarini, L; Carelli, V; Esposti, M D

    2011-01-01

    Mitochondrial dysfunction often leads to cell death and disease. We can now draw correlations between the dysfunction of one of the most important mitochondrial enzymes, NADH:ubiquinone reductase or complex I, and its structural organization thanks to the recent advances in the X-ray structure of its bacterial homologs. The new structural information on bacterial complex I provides essential clues to finally understand how complex I may work. However, the same information remains difficult to interpret for many scientists working on mitochondrial complex I from different angles, especially in the field of cell death. Here, we present a novel way of interpreting the bacterial structural information in accessible terms. On the basis of the analogy to semi-automatic shotguns, we propose a novel functional model that incorporates recent structural information with previous evidence derived from studies on mitochondrial diseases, as well as functional bioenergetics. PMID:22030538

  4. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar, represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on a two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results of the proposed algorithm and a human analyst was estimated at 85% and 93% for ENVISAT and RADARSAT, respectively. PMID:23790462

  5. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  6. Parameter estimation in distributed hydrological catchment modelling using automatic calibration with multiple objectives

    NASA Astrophysics Data System (ADS)

    Madsen, Henrik

    A consistent framework for parameter estimation in distributed hydrological catchment modelling using automatic calibration is formulated. The framework covers the different steps in the estimation process, from model parameterisation and selection of calibration parameters to formulation of calibration criteria and choice of optimisation algorithm. The calibration problem is formulated in a general multi-objective context in which different objective functions that measure individual process descriptions can be optimised simultaneously. Within this framework it is possible to tailor the model calibration to the specific objectives of the model application being considered. A test example is presented that illustrates the use of the calibration framework for parameter estimation in the MIKE SHE integrated and distributed hydrological modelling system. A significant trade-off between the performance of the groundwater level simulations and the catchment runoff is observed in this case, defining a Pareto front with a very sharp structure. The Pareto optimum solution corresponding to a proposed balanced aggregated objective function is seen to provide a proper balance between the two objectives. Compared to a manual expert calibration, the balanced Pareto optimum solution provides generally better simulation of the runoff, whereas virtually similar performance is obtained for the groundwater level simulations.
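
    A multi-objective calibration of this kind produces a set of non-dominated (Pareto-optimal) parameter sets rather than a single optimum. The sketch below extracts the non-dominated runs from a table of objective values; it is a generic illustration, not the MIKE SHE calibration code, and it assumes both objectives are errors to be minimized.

```python
# Sketch: extract the Pareto-optimal front from calibration runs with two objectives to minimize
# (e.g. groundwater-level error and runoff error). Each row holds one run's objective values.
import numpy as np

def pareto_front(objectives):
    """objectives: (n_runs, n_objectives) array, all objectives to be minimized."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # run i is dominated if some run is no worse in all objectives and better in at least one
        dominated_by = (np.all(objectives <= objectives[i], axis=1)
                        & np.any(objectives < objectives[i], axis=1))
        if dominated_by.any():
            keep[i] = False
    return np.flatnonzero(keep)

runs = np.array([[0.20, 0.90], [0.35, 0.40], [0.50, 0.35], [0.60, 0.80]])
print(pareto_front(runs))      # indices of non-dominated runs: [0 1 2]
```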

  7. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  8. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time. PMID:26405887
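
    Two of the four feature categories listed above (basic statistics and GLCM texture features) can be computed directly with scikit-image. The sketch below is illustrative only: it assumes an 8-bit image slice and the graycomatrix/graycoprops naming of scikit-image 0.19 or later, and it covers only a handful of the 83 attributes used in the study.

```python
# Sketch: a few basic-statistics and GLCM texture attributes for one 8-bit image slice.
# Assumes scikit-image >= 0.19 (graycomatrix/graycoprops naming); not the study's full feature set.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {
        "mean": float(img_u8.mean()),
        "std": float(img_u8.std()),
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
        "energy": float(graycoprops(glcm, "energy")[0, 0]),
    }

slice_u8 = np.random.default_rng(0).integers(0, 256, size=(128, 128), dtype=np.uint8)
print(texture_features(slice_u8))
```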

  9. Automatic segmentation of bladder and prostate using coupled 3D deformable models.

    PubMed

    Costa, María Jimena; Delingette, Hervé; Novellas, Sébastien; Ayache, Nicholas

    2007-01-01

    In this paper, we propose a fully automatic method for the coupled 3D localization and segmentation of lower abdomen structures. We apply it to the joint segmentation of the prostate and bladder in a database of CT scans of the lower abdomen of male patients. A flexible approach on the bladder allows the process to easily adapt to high shape variation and to intensity inhomogeneities that would be hard to characterize (due, for example, to the level of contrast agent that is present). On the other hand, a statistical shape prior is enforced on the prostate. We also propose an adaptive non-overlapping constraint that arbitrates the evolution of both structures based on the availability of strong image data at their common boundary. The method has been tested on a database of 16 volumetric images, and the validation process includes an assessment of inter-expert variability in prostate delineation, with promising results. PMID:18051066

  10. Tools for the automatic identification and classification of RNA base pairs

    PubMed Central

    Yang, Huanwang; Jossinet, Fabrice; Leontis, Neocles; Chen, Li; Westbrook, John; Berman, Helen; Westhof, Eric

    2003-01-01

    Three programs have been developed to aid in the classification and visualization of RNA structure. BPViewer provides a web interface for displaying three-dimensional (3D) coordinates of individual base pairs or base pair collections. A web server, RNAview, automatically identifies and classifies the types of base pairs that are formed in nucleic acid structures by various combinations of the three edges, Watson–Crick, Hoogsteen and the Sugar edge. RNAView produces two-dimensional (2D) diagrams of secondary and tertiary structure in either Postscript, VRML or RNAML formats. The application RNAMLview can be used to rearrange various parts of the RNAView 2D diagram to generate a standard representation (like the cloverleaf structure of tRNAs) or any layout desired by the user. A 2D diagram can be rapidly reformatted using RNAMLview since all the parts of RNA (like helices and single strands) are dynamically linked while moving the selected parts. With the base pair annotation and the 2D graphic display, RNA motifs are rapidly identified and classified. A survey has been carried out for 41 unique structures selected from the NDB database. The statistics for the occurrence of each edge and of each of the 12 bp families are given for the combinations of the four bases: A, G, U and C. The program also allows for visualization of the base pair interactions by using a symbolic convention previously proposed for base pairs. The web servers for BPViewer and RNAview are available at http://ndbserver.rutgers.edu/services/. The application RNAMLview can also be downloaded from this site. The 2D diagrams produced by RNAview are available for RNA structures in the Nucleic Acid Database (NDB) at http://ndbserver.rutgers.edu/atlas/. PMID:12824344

  11. An automatic data acquisition system for optical characterization of PEDOT:PSS-based gas sensor

    NASA Astrophysics Data System (ADS)

    Junaidi, Aba, La; Triyana, Kuwat

    2015-04-01

    A measurement system consisting of a laser diode and photodiode pair coupled with an automatic data acquisition system based on an AVR ATMega16 microcontroller (hereafter called DAQ MA-16) has been developed for measuring the optical response of a polymer-based gas sensor. In this case, the optical response was represented by the voltage output of the photodiode. The polymer-based gas sensor was a thin film of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate), or PEDOT:PSS, deposited on a glass substrate. For measurement, the sensor was placed in the chamber and ammonia gas was flowed into the chamber at a fixed flow rate. A pump was installed on the opposite side of the chamber to exhaust the gas. A National Instruments data acquisition (NI DAQ) BNC-2110 was used to calibrate the DAQ MA-16 system. From the calibration, it can be estimated that the accuracy of DAQ MA-16 is about 99.4%.

  12. Automatic Single Tree Detection in Plantations using UAV-based Photogrammetric Point clouds

    NASA Astrophysics Data System (ADS)

    Kattenborn, T.; Sperlich, M.; Bataua, K.; Koch, B.

    2014-08-01

    For reasons of documentation, management and certification, there is high interest in efficient inventories of palm plantations at the single-plant level. Recent developments in unmanned aerial vehicle (UAV) technology facilitate spatially and temporally flexible acquisition of high resolution 3D data. Common single tree detection approaches are based on Very High Resolution (VHR) satellite or Airborne Laser Scanning (ALS) data. However, VHR data is often limited by clouds and commonly does not allow height measurements. VHR and in particular ALS data are characterized by relatively high acquisition costs. Sperlich et al. (2013) already demonstrated the high potential of UAV-based photogrammetric point clouds for single tree detection using pouring algorithms. This approach was adjusted and improved for application to palm plantations. The 9.4 ha test site on Tarawa, Kiribati, comprised densely scattered palms as well as abundant undergrowth and trees. Using a standard consumer-grade camera mounted on an octocopter, two flight campaigns at 70 m and 100 m altitude were performed to evaluate the effect of Ground Sampling Distance (GSD) and image overlap. To avoid commission errors and improve the terrain interpolation, the point clouds were classified based on the geometric characteristics of the classes, i.e. (1) palm, (2) other vegetation and (3) ground. The mapping accuracy amounts to 86.1% for the entire study area and 98.2% for densely growing palm stands. We conclude that this flexible and automatic approach has high potential for operational use.

  13. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web service has become the technology of choice for service oriented computing to meet the interoperability demands in web applications. In the Internet era, the exponential addition of web services nominates the “quality of service” as an essential parameter for discriminating between web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894

  14. Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums

    PubMed Central

    Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-01-01

    Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and determined, on the basis of the best regression models, the probability of any request belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision of a test sample were calculated as a measure of quality for the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other
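
    The Cramer's V association measure mentioned above can be computed from a word-by-category contingency table. The sketch below uses SciPy's chi-square test on a 2x2 table; the counts are hypothetical and the category name is only an example.

```python
# Sketch: Cramer's V for the association between a word's presence and a request category,
# computed from a 2x2 contingency table (the counts below are hypothetical).
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# rows: word present / absent; columns: request in category "IVF" / not
table = np.array([[60, 42],
                  [120, 766]])
print(f"Cramer's V = {cramers_v(table):.3f}")
```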

  15. TReMAP: Automatic 3D Neuron Reconstruction Based on Tracing, Reverse Mapping and Assembling of 2D Projections.

    PubMed

    Zhou, Zhi; Liu, Xiaoxiao; Long, Brian; Peng, Hanchuan

    2016-01-01

    Efficient and accurate digital reconstruction of neurons from large-scale 3D microscopic images remains a challenge in neuroscience. We propose a new automatic 3D neuron reconstruction algorithm, TReMAP, which utilizes 3D Virtual Finger (a reverse-mapping technique) to detect 3D neuron structures based on tracing results on 2D projection planes. Our fully automatic tracing strategy achieves close performance with the state-of-the-art neuron tracing algorithms, with the crucial advantage of efficient computation (much less memory consumption and parallel computation) for large-scale images. PMID:26306866

  16. An automatic water body area monitoring algorithm for satellite images based on Markov Random Fields

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Tourian, Mohammad J.; Sneeuw, Nico

    2016-04-01

    Our knowledge of the spatial and temporal variation of hydrological parameters is surprisingly poor, because most of it is based on in situ stations, and the number of stations has decreased dramatically during the past decades. On the other hand, remote sensing techniques have proven their ability to measure different parameters of Earth phenomena. Optical and SAR satellite imagery provide the opportunity to monitor the spatial change in coastline, which can serve as a way to determine the water extent repeatedly at an appropriate time interval. An appropriate classification technique to separate water and land is the backbone of any automatic water body monitoring approach. Due to changes in the water level, river and lake extent, atmosphere, sunlight radiation and onboard calibration of the satellite over time, most pixel-based classification techniques fail to determine accurate water masks. Beyond pixel intensity, spatial correlation between neighboring pixels is another source of information that should be used to decide the label of pixels. Water bodies have strong spatial correlation in satellite images. Therefore, including contextual information as an additional constraint in the water body monitoring procedure improves the accuracy of the derived water masks significantly. In this study, we present an automatic algorithm for water body area monitoring based on maximum a posteriori (MAP) estimation of Markov Random Fields (MRF). First, we collect all available images of selected case studies during the monitoring period. Then, for each image separately, we apply k-means clustering to derive a primary water mask. After that, we develop an MRF using the pixel values and the primary water mask of each image. Among the different realizations of the field, we then select the one that maximizes the posterior estimate. We solve this optimization problem using graph cut techniques. A graph with two terminals is constructed, after which the best labelling structure for
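
    A simplified sketch of the two stages described above: k-means with two clusters on the pixel values provides the primary water mask, and an iterated-conditional-modes (ICM) pass is used here as a simple stand-in for the graph-cut MAP estimation, incorporating a 4-neighbour smoothness term. The smoothness weight, iteration count and the assumption that the darker cluster is water are illustrative choices, not the study's settings.

```python
# Simplified sketch: k-means primary water mask, then ICM relabelling with a 4-neighbour
# smoothness term (a stand-in for the graph-cut MAP step of the MRF). `beta`, the iteration
# count and the "darker cluster = water" assumption are illustrative, not the study's settings.
import numpy as np
from sklearn.cluster import KMeans

def primary_water_mask(image):
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(image.reshape(-1, 1))
    labels = labels.reshape(image.shape)
    water_cluster = np.argmin([image[labels == k].mean() for k in (0, 1)])
    return labels == water_cluster                      # darker cluster assumed to be water

def icm_smooth(image, mask, beta=1.5, iterations=5):
    means = np.array([image[~mask].mean(), image[mask].mean()])   # land / water class means
    labels = mask.astype(int)
    for _ in range(iterations):
        padded = np.pad(labels, 1, mode="edge")
        water_neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                           + padded[1:-1, :-2] + padded[1:-1, 2:])
        cost_land = (image - means[0]) ** 2 + beta * water_neighbors
        cost_water = (image - means[1]) ** 2 + beta * (4 - water_neighbors)
        labels = (cost_water < cost_land).astype(int)
    return labels.astype(bool)
```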

  17. Informatics in Radiology (infoRAD): radiology report entry with automatic phrase completion driven by language modeling.

    PubMed

    Eng, John; Eisner, Jason M

    2004-01-01

    Keyboard entry or correction of radiology reports by radiologists and transcriptionists remains necessary in many settings despite advances in computerized speech recognition. A report entry system that implements an automated phrase completion feature based on language modeling was developed and tested. The special text editor uses context to predict the full word or phrase being typed, updating the displayed prediction after each keystroke. At any point, pressing the tab key inserts the predicted phrase without having to type the remaining characters of the phrase. Successive words of the phrase are predicted by a trigram language model. Phrase lengths are chosen to minimize the expected number of keystrokes as predicted by the language model. Operation is highly and automatically customized to each user. The language model was trained on 36,843 general radiography reports, which were consecutively generated and contained 1.48 million words. Performance was tested on 200 randomly selected reports outside of the training set. The phrase completion technique reduced the average number of keystrokes per report from 194 to 58; the average reduction factor was 3.3 (geometric mean) (95% confidence interval, 3.2-3.5). The algorithm significantly reduced the number of keystrokes required to generate a radiography report (P <.00005). PMID:15371624
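
    The prediction step can be illustrated with a toy trigram model: counts of (previous two words) -> next word are collected from a report corpus, and the most frequent continuation is offered after the last two typed words. The sketch below uses a hypothetical three-report corpus and a greedy multi-word extension; it omits the expected-keystroke optimization described in the abstract.

```python
# Sketch: trigram next-word prediction for phrase completion. Counts come from a toy corpus;
# the greedy multi-word extension ignores the paper's expected-keystroke optimization.
from collections import defaultdict, Counter

corpus = [
    "no acute cardiopulmonary abnormality",
    "no acute cardiopulmonary disease",
    "no focal airspace consolidation",
]
trigrams = defaultdict(Counter)
for report in corpus:
    words = ["<s>", "<s>"] + report.split()
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        trigrams[(w1, w2)][w3] += 1

def predict_phrase(context, max_words=3):
    history, phrase = list(context), []
    for _ in range(max_words):
        counts = trigrams.get((history[-2], history[-1]))
        if not counts:
            break
        nxt = counts.most_common(1)[0][0]       # most frequent continuation
        phrase.append(nxt)
        history.append(nxt)
    return " ".join(phrase)

print(predict_phrase(("no", "acute")))          # e.g. "cardiopulmonary abnormality"
```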

  18. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is any cut set that, if any of the failures in the set were to be removed, the occurrence of the other failures in the set will not cause the failure of the event represented by the node. Cut sets calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious systems failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal

  20. Automatic calibration of a global hydrological model using satellite data as a proxy for stream flow data

    NASA Astrophysics Data System (ADS)

    Revilla-Romero, B.; Beck, H.; Salamon, P.; Burek, P.; Thielen, J.; de Roo, A.

    2014-12-01

    Model calibration and validation are commonly restricted due to the limited availability of historical in situ observational data. Several studies have demonstrated that using complementary remotely sensed datasets such as soil moisture for model calibration has led to improvements. The aim of this study was to evaluate the use of the remotely sensed signal of the Global Flood Detection System (GFDS) as a proxy for stream flow data to calibrate a global hydrological model used in operational flood forecasting. This is done in different river basins located in Africa, South America and North America for the time period 1998-2010 by comparing model calibration using the raw satellite signal as a proxy for river discharge with model calibration using in situ stream flow observations. River flow is simulated using the LISFLOOD hydrological model for the flow routing in the river network and the groundwater mass balance. The model is set up with global coverage at a horizontal grid resolution of 0.1 degree and a daily time step for input/output data. Based on prior tests, a set of seven model parameters was used for calibration. The parameter space was defined by specifying lower and upper limits on each parameter. The objective functions considered were Pearson correlation (R), log Nash-Sutcliffe Efficiency (NSlog) and Kling-Gupta Efficiency (KGE'), with both single- and multi-objective functions employed. After multiple iterations, for each catchment, the algorithm generated a Pareto-optimal front of solutions. A single parameter set was selected which had the lowest distance to R=1 for the single-objective function and to NSlog=1 and KGE'=1 for the multi-objective function. The results for the different test river basins are compared against the performance obtained when calibrating against in situ discharge observations using the same objective functions. Automatic calibration strategies of the global hydrological model using satellite data as a proxy for stream flow data are outlined and discussed.
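
    The objective functions named in the abstract have standard published formulations; a minimal sketch is given below, assuming simulated and observed daily discharge arrays of equal length. This is not the LISFLOOD calibration code, and the small offset `eps` guarding the log transform is an illustrative choice.

```python
import numpy as np

def nslog(sim, obs, eps=0.01):
    """Nash-Sutcliffe efficiency computed on log-transformed flows (NSlog)."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

def kge_prime(sim, obs):
    """Modified Kling-Gupta efficiency (KGE')."""
    r = np.corrcoef(sim, obs)[0, 1]                               # correlation
    beta = sim.mean() / obs.mean()                                # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

obs = np.array([10.0, 12.0, 30.0, 55.0, 40.0, 20.0])
sim = np.array([11.0, 13.0, 28.0, 50.0, 42.0, 18.0])
print(nslog(sim, obs), kge_prime(sim, obs))
```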

  1. Automatic registration of large-scale urban scene point clouds based on semantic feature points

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liang, Fuxun; Liu, Yuan

    2016-03-01

    Point clouds collected by terrestrial laser scanning (TLS) from large-scale urban scenes contain a wide variety of objects (buildings, cars, pole-like objects, and others) with symmetric and incomplete structures and relatively low-textured surfaces, all of which pose great challenges for automatic registration between scans. To address these challenges, this paper proposes a registration method that provides marker-free and multi-view registration based on extracted semantic feature points. First, the method detects the semantic feature points within a detection scheme that includes point cloud segmentation, vertical feature line extraction, and semantic information calculation, and finally takes the intersections of these lines with the ground as the semantic feature points. Second, the proposed method matches the semantic feature points using geometrical constraints (a 3-point scheme) as well as semantic information (category and direction), resulting in exhaustive pairwise registration between scans. Finally, the proposed method implements multi-view registration by constructing a minimum spanning tree of the fully connected graph derived from the exhaustive pairwise registrations. Experiments have demonstrated that the proposed method performs well in various urban environments and indoor scenes with centimeter-level accuracy and improves the efficiency, robustness, and accuracy of registration in comparison with feature plane-based methods.
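
    The final multi-view step, a minimum spanning tree over the fully connected graph of pairwise registrations, can be sketched as follows. The pairwise error matrix is fabricated, and the SciPy routine is a generic MST, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical pairwise registration errors (e.g. RMS residuals in metres) between 4 scans.
err = np.array([
    [0.00, 0.03, 0.10, 0.50],
    [0.03, 0.00, 0.04, 0.40],
    [0.10, 0.04, 0.00, 0.05],
    [0.50, 0.40, 0.05, 0.00],
])

# The MST keeps the most reliable pairwise registrations that still connect every scan;
# chaining the transformations along its edges gives the multi-view registration order.
mst = minimum_spanning_tree(err).toarray()
edges = [(int(i), int(j)) for i, j in zip(*np.nonzero(mst))]
print(edges)   # e.g. [(0, 1), (1, 2), (2, 3)]
```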

  2. UFC advisor: An AI-based system for the automatic test environment

    NASA Technical Reports Server (NTRS)

    Lincoln, David T.; Fink, Pamela K.

    1990-01-01

    The Air Logistics Command within the Air Force is responsible for maintaining a wide variety of aircraft fleets and weapon systems. Maintaining these fleets and systems requires specialized test equipment that provides data concerning the behavior of a particular device. The test equipment is used to 'poke and prod' the device to determine its functionality. The data represent voltages, pressures, torques, temperatures, etc. and are called testpoints. These testpoints can be defined numerically as being in or out of limits/tolerance. Some test equipment is termed 'automatic' because it is computer-controlled. Because effective maintenance in the test arena requires significant expertise, it is an ideal area for the application of knowledge-based system technology. Such a system would take testpoint data, identify out-of-limits values, and determine potential underlying problems based on what is out of limits and by how far. This paper discusses the application of this technology to a device called the Unified Fuel Control (UFC), which is maintained in this manner.

  3. A real-time semi-automatic video segmentation system based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Hao; Wu, Chi-Hao; Chen, Jun-Cheng; Kuo, Jin-Hau; Wu, Ja-Ling

    2005-07-01

    Mathematical morphology provides a systematic approach to analyzing the geometric characteristics of signals or images and has been widely applied to applications such as edge detection, object segmentation, and noise suppression. In this paper, a supervised morphology-based video segmentation system is proposed. To indicate where a semantic object resides, the user clicks the mouse near the boundary of the object in the first frame of a video to give its rough definition, shape, and location. The proposed system automatically segments the first frame by first locating the searching area and then classifying the units in it into object and non-object parts to find the continuous contour by means of a multi-valued watershed algorithm using a hierarchical queue. An adaptive morphological operator based on edge strength, computed by a multi-scale morphological gradient algorithm, is proposed to reduce the error introduced by the user's assistance so that the searching area is created correctly. When extended to video object segmentation, a fast video tracking technique is applied. Under the assumption of small motion, the object can be segmented in real time. Moreover, an accuracy evaluation mechanism is proposed to ensure the robustness of the segmentation.
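
    The edge-strength cue mentioned in the abstract, a multi-scale morphological gradient, can be approximated as the average of dilation-minus-erosion responses over several structuring-element sizes. The sketch below uses scipy.ndimage on a synthetic frame and is not the authors' adaptive operator.

```python
import numpy as np
from scipy import ndimage

def multiscale_morphological_gradient(image, scales=(3, 5, 7)):
    """Average of (dilation - erosion) over several structuring-element sizes."""
    grads = [ndimage.grey_dilation(image, size=(s, s)) - ndimage.grey_erosion(image, size=(s, s))
             for s in scales]
    return np.mean(grads, axis=0)

# Synthetic frame: a bright square object on a dark background.
frame = np.zeros((64, 64))
frame[20:44, 20:44] = 1.0
edge_strength = multiscale_morphological_gradient(frame)
print(edge_strength.max(), edge_strength[32, 32])   # strong on the boundary, ~0 inside
```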

  4. Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer

    NASA Astrophysics Data System (ADS)

    Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune

    2014-03-01

    Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the contrast agent 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region of interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling allow the bladder to be removed and the tumour and metastatic lymph nodes to be identified. The proposed method was applied to 125 patients and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has a very high potential for substituting the tedious manual delineation of PET-positive areas.
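
    A rough sketch of the later pipeline stages, thresholding the uptake, labelling connected components, and discarding the component that contains a bladder seed, is given below on synthetic 2-D data. The seed location, threshold, and blob layout are all fabricated; the published method works in 3-D with level sets and clustering.

```python
import numpy as np
from scipy import ndimage

# Synthetic 2-D "uptake" image: a bright bladder blob plus a nearby tumour blob.
img = np.zeros((64, 64))
img[10:25, 10:25] = 5.0      # bladder (high FDG washed out through the urine)
img[35:45, 38:50] = 3.0      # tumour
img += np.random.default_rng(3).normal(0, 0.1, img.shape)

mask = img > 1.0                          # threshold-based region of interest
labels, n = ndimage.label(mask)           # connected components

bladder_seed = (17, 17)                   # hypothetical seed, e.g. from anatomical priors
bladder_label = labels[bladder_seed]
gtv_mask = mask & (labels != bladder_label)
print("components:", n, "candidate GTV pixels:", int(gtv_mask.sum()))
```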

  5. Automatic classification of delphinids based on the representative frequencies of whistles.

    PubMed

    Lin, Tzu-Hao; Chou, Lien-Siang

    2015-08-01

    Classification of odontocete species remains a challenging task for passive acoustic monitoring. Classifiers that have been developed use spectral features extracted from echolocation clicks and whistle contours. Most of these contour-based classifiers require complete contours to reduce measurement errors. Therefore, overlapping contours and partially detected contours in an automatic detection algorithm may increase the bias of contour-based classifiers. In this study, classification was conducted on each recording section without extracting individual contours. The local-max detector was used to extract representative frequencies of delphinid whistles, and each section was divided into multiple non-overlapping fragments. Three acoustical parameters were measured from the distribution of representative frequencies in each fragment. By using the statistical features of the acoustical parameters and the percentage of overlapping whistles, a correct classification rate of 70.3% was reached for the recordings of seven species (Tursiops truncatus, Delphinus delphis, Delphinus capensis, Peponocephala electra, Grampus griseus, Stenella longirostris longirostris, and Stenella attenuata) archived in MobySound.org. In addition, the correct classification rate was not dramatically reduced in various simulated noise conditions. This algorithm can be employed in acoustic observatories to classify different delphinid species and facilitate future studies on the community ecology of odontocetes. PMID:26328716
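
    The fragment-based feature extraction described in the abstract can be sketched as follows: a recording section is split into non-overlapping fragments and simple statistics of the representative frequencies are computed per fragment. The fragment length, the three statistics, and the detection data below are illustrative, not the paper's actual parameters.

```python
import numpy as np

def fragment_features(times_s, freqs_hz, section_len_s=10.0, n_fragments=5):
    """Split a recording section into equal fragments and summarise whistle frequencies in each."""
    edges = np.linspace(0.0, section_len_s, n_fragments + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        f = freqs_hz[(times_s >= lo) & (times_s < hi)]
        if f.size:
            feats.append((f.mean(), f.std(), f.max() - f.min()))
        else:
            feats.append((0.0, 0.0, 0.0))
    return np.array(feats)

# Fabricated local-max detections: times (s) and representative frequencies (Hz).
t = np.array([0.2, 0.4, 2.1, 2.3, 4.8, 5.0, 7.7, 9.1])
f = np.array([9500, 10200, 11800, 12500, 8800, 9100, 13000, 7600])
print(fragment_features(t, f))
```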

  6. [Automatic classification method of star spectra data based on manifold fuzzy twin support vector machine].

    PubMed

    Liu, Zhong-bao; Gao, Yan-yun; Wang, Jian-zhen

    2015-01-01

    Support vector machine (SVM), with good learning ability and generalization, is widely used in star spectra data classification. But when the scale of the data becomes larger, the shortcomings of SVM appear: the amount of calculation is quite large and the classification speed is too slow. In order to solve the above problems, the twin support vector machine (TWSVM) was proposed by Jayadeva. The advantage of TWSVM is that its time cost is reduced to 1/4 of that of SVM. However, all the methods mentioned above focus only on global characteristics and neglect local characteristics. In view of this, an automatic classification method of star spectra data based on manifold fuzzy twin support vector machine (MF-TSVM) is proposed in this paper. In MF-TSVM, manifold-based discriminant analysis (MDA) is used to obtain the global and local characteristics of the input data, and fuzzy membership is introduced to reduce the influence of noise and singular data on the classification results. Comparative experiments with current classification methods, such as C-SVM and KNN, on the SDSS star spectra datasets verify the effectiveness of the proposed method. PMID:25993861

  7. Comparison of dictionary-based approaches to automatic repeating melody extraction

    NASA Astrophysics Data System (ADS)

    Shih, Hsuan-Huei; Narayanan, Shrikanth S.; Kuo, C.-C. Jay

    2001-12-01

    Automatic melody extraction techniques can be used to index and retrieve songs in music databases. Here, we consider a piece of music consisting of numerical music scores (e.g. the MIDI file format) as the input. Segmentation is done based on the tempo information, and a music score is decomposed into bars. Each bar is indexed, and a bar index table is built accordingly. Two approaches to finding repeating patterns were recently proposed by the authors. In the first approach, an adaptive dictionary-based algorithm known as Lempel-Ziv 78 (LZ78) was modified and applied to melody extraction; this is called the modified LZ78 algorithm, or MLZ78. In the second approach, a sliding window is applied to generate the pattern dictionary; this is called the Exhaustive Search with Progressive LEngth algorithm, or ESPLE. Dictionaries generated from both approaches need to be pruned to remove non-repeating patterns. Each iteration of either MLZ78 or ESPLE is followed by pruning of the updated dictionaries generated in the previous cycle until the dictionaries converge. Experiments are performed on MIDI files to evaluate the performance of the proposed algorithms. In this research, we compare results obtained from these two systems in terms of complexity, performance accuracy and efficiency. Their relative merits and shortcomings are discussed in detail.
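
    A minimal sketch of the plain LZ78 parsing that MLZ78 modifies is shown below, applied to a sequence of bar indices; the pruning of non-repeating patterns and the sliding-window ESPLE variant are omitted.

```python
def lz78_parse(symbols):
    """Plain LZ78 parsing: grow a dictionary of progressively longer phrases."""
    dictionary = {(): 0}          # phrase -> index
    phrase = ()
    for s in symbols:
        candidate = phrase + (s,)
        if candidate in dictionary:
            phrase = candidate    # keep extending a known phrase
        else:
            dictionary[candidate] = len(dictionary)
            phrase = ()           # start a new phrase
    return [p for p in dictionary if p]

# Bar indices of a toy melody containing a repeating figure (1, 2, 3).
melody_bars = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3]
print(lz78_parse(melody_bars))    # longer entries such as (1, 2, 3) indicate repetition
```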

  8. Automatic identification of the number of food items in a meal using clustering techniques based on the monitoring of swallowing and chewing

    PubMed Central

    Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Fontana, Juan M.; Sazonov, Edward

    2012-01-01

    The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food types. Data collected from experiments of 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, the AHC achieved an accuracy >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integral application for objective Monitoring of Ingestive Behavior in free living conditions. PMID:23125872
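
    A minimal sketch of the unsupervised step, letting Affinity Propagation choose the number of clusters (food items) itself, is shown below using scikit-learn. The two-dimensional chewing/swallowing features are fabricated stand-ins for the sensor-derived features used in the study.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Hypothetical 2-D features (e.g. chew rate, swallow duration) for three food items in one meal.
features = np.vstack([
    rng.normal([1.0, 0.5], 0.05, size=(20, 2)),
    rng.normal([2.0, 1.5], 0.05, size=(20, 2)),
    rng.normal([0.5, 2.0], 0.05, size=(20, 2)),
])

# Affinity Propagation chooses the number of clusters itself (the number of food items).
labels = AffinityPropagation(random_state=0).fit_predict(features)
print("estimated food items:", len(np.unique(labels)))
```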

  9. Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach.

    PubMed

    Liu, Guozhong; Yousefi, Siavash; Zhi, Zhongwei; Wang, Ruikang K

    2011-09-12

    This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method utilizes the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs with different beam spot sizes. Then, the best beam spot size is automatically estimated based on the discontinuity of the information entropy of the recovered images. Therefore, no prior knowledge of the parameters or PSF of the OCT system is required to deconvolve the image. The model does not account for the diffraction and the coherent scattering of light by the sample. A series of experiments was performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, as well as the human fingertip in vivo to show the performance of the proposed method. The method may also be useful in combination with other deconvolution algorithms for PSF estimation and image recovery. PMID:21935179
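
    The idea, deconvolving with a family of Gaussian PSFs and picking the spot size where the information entropy of the recovered image changes abruptly, can be sketched with plain NumPy/SciPy. The Richardson-Lucy loop, entropy estimator, and synthetic test image below are simplified stand-ins, not the authors' OCT pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=30):
    """Basic Richardson-Lucy iteration (no regularization)."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# Synthetic defocused image: two bright points blurred by an unknown Gaussian PSF.
truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 44] = 1.0
observed = fftconvolve(truth, gaussian_psf(15, 2.5), mode="same")

# Deconvolve with a family of candidate spot sizes and track the entropy of the result;
# an abrupt change in this curve indicates the best-matching PSF (as in the abstract).
for sigma in (1.0, 1.5, 2.0, 2.5, 3.0, 3.5):
    h = entropy(richardson_lucy(observed, gaussian_psf(15, sigma)))
    print(f"sigma={sigma:.1f}  entropy={h:.3f}")
```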

  10. Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach

    PubMed Central

    Liu, Guozhong; Yousefi, Siavash; Zhi, Zhongwei; Wang, Ruikang K.

    2011-01-01

    This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method utilizes the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs with different beam spot sizes. Then, the best beam spot size is automatically estimated based on the discontinuity of the information entropy of the recovered images. Therefore, no prior knowledge of the parameters or PSF of the OCT system is required to deconvolve the image. The model does not account for the diffraction and the coherent scattering of light by the sample. A series of experiments was performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, as well as the human fingertip in vivo to show the performance of the proposed method. The method may also be useful in combination with other deconvolution algorithms for PSF estimation and image recovery. PMID:21935179

  11. Multifractal Analysis and Relevance Vector Machine-Based Automatic Seizure Detection in Intracranial EEG.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Shasha

    2015-09-01

    Automatic seizure detection technology is of great significance for long-term electroencephalogram (EEG) monitoring of epilepsy patients. The aim of this work is to develop a seizure detection system with high accuracy. The proposed system was mainly based on multifractal analysis, which describes the local singular behavior of fractal objects and characterizes the multifractal structure using a continuous spectrum. Compared with computing a single fractal dimension, multifractal analysis can provide a better description of the transient behavior of EEG fractal time series during the evolution from the interictal stage to seizures. Thus both interictal EEG and ictal EEG were analyzed with the multifractal formalism, and their differences in multifractal features were used to distinguish the two classes of EEG and detect seizures. In the proposed detection system, eight features (α0, α(min), α(max), Δα, f(α(min)), f(α(max)), Δf and R) were extracted from the multifractal spectra of the preprocessed EEG to construct feature vectors. Subsequently, a relevance vector machine (RVM) was applied for EEG pattern classification, and a series of post-processing operations were used to increase the accuracy and reduce false detections. Both epoch-based and event-based evaluation methods were performed to appraise the system's performance on the EEG recordings of 21 patients in the Freiburg database. An epoch-based sensitivity of 92.94% and specificity of 97.47% were achieved, and the proposed system obtained a sensitivity of 92.06% with a false detection rate of 0.34/h in the event-based performance assessment. PMID:25986754

  12. Automatic differentiation for gradient-based optimization of radiatively heated microelectronics manufacturing equipment

    SciTech Connect

    Moen, C.D.; Spence, P.A.; Meza, J.C.; Plantenga, T.D.

    1996-12-31

    Automatic differentiation is applied to the optimal design of microelectronic manufacturing equipment. The performance of nonlinear least-squares optimization methods is compared between numerical and analytical gradient approaches. The optimization calculations are performed by running large finite-element codes in an object-oriented optimization environment. The Adifor automatic differentiation tool is used to generate analytic derivatives for the finite-element codes. The performance results support previous observations that automatic differentiation becomes beneficial as the number of optimization parameters increases. The increase in speed relative to numerical differencing has a limited value, and results are reported for two different analysis codes.
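
    The contrast drawn in the abstract, analytic derivatives from automatic differentiation versus numerical differencing, can be illustrated with a toy forward-mode implementation based on dual numbers. This is only a conceptual sketch in Python; Adifor itself instruments Fortran source to produce analytic derivative code.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode automatic differentiation with dual numbers: value plus derivative."""
    val: float
    der: float = 0.0

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val, self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

def model(x):
    # Toy "analysis code": y = 3*x*x + 2*x + 1, so dy/dx = 6*x + 2
    return 3 * x * x + 2 * x + 1

x0 = 2.0
exact = model(Dual(x0, 1.0)).der                       # automatic differentiation
h = 1e-6
approx = (model(x0 + h) - model(x0 - h)) / (2 * h)     # central finite difference
print(exact, approx)                                   # 14.0 vs ~14.0
```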

  13. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    NASA Astrophysics Data System (ADS)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is perturbed by multiplying it by a spatially uniform factor, which is the only quantity optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions
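
    The cost-saving strategy of optimizing a single spatially uniform multiplier on a distributed parameter field can be sketched as below, where a trivial stand-in replaces the HyMAP routing model and virtual observations are generated from a known "true" factor (a twin experiment in miniature).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
base_manning = rng.uniform(0.02, 0.05, size=50)      # prior spatially distributed parameter
true_factor = 1.3                                    # factor used to generate the virtual truth

def toy_model(manning):
    """Stand-in for the routing model: water height simply grows with roughness here."""
    return 2.0 + 8.0 * manning

virtual_obs = toy_model(true_factor * base_manning) + rng.normal(0, 0.01, base_manning.size)

def cost(factor):
    simulated = toy_model(factor * base_manning)
    return np.sum((simulated - virtual_obs) ** 2)    # misfit over the simulation period

result = minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded")
print(f"recovered factor: {result.x:.3f} (true 1.3)")
```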

  14. Experiments with Uas Imagery for Automatic Modeling of Power Line 3d Geometry

    NASA Astrophysics Data System (ADS)

    Jóźków, G.; Vander Jagt, B.; Toth, C.

    2015-08-01

    The ideal mapping technology for transmission line inspection is airborne LiDAR flown from helicopter platforms. It allows for full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still expensive. UAS technology has the potential to reduce these costs, especially if inexpensive platforms with consumer-grade cameras are used. This study investigates the potential of using high-resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points were also created on the wires. This allowed the 3D geometry of transmission lines to be modeled in a manner similar to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.
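
    Once dense matching yields points on a wire, the sag of a span can be estimated by fitting its vertical profile; a parabolic approximation to the catenary is a common simplification. The sketch below fabricates noisy span points and recovers the mid-span sag; it is not the processing chain used in the study.

```python
import numpy as np

# Hypothetical dense-matching points on one wire span: x = along-span distance (m), z = height (m).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 120.0, 200)
z_true = 30.0 - 0.0015 * (x - 60.0) ** 2          # parabolic approximation of the catenary
z = z_true + rng.normal(0.0, 0.03, x.size)        # matching noise of a few centimetres

# Least-squares fit of z = a*x^2 + b*x + c; sag is the drop from the supports to mid-span.
a, b, c = np.polyfit(x, z, 2)
z_fit = np.polyval([a, b, c], x)
sag = max(z_fit[0], z_fit[-1]) - z_fit.min()
print(f"estimated mid-span sag: {sag:.2f} m")     # close to the true 5.4 m
```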

  15. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems, which warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, more insightful research towards exploiting sparsely sampled, piecewise-dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  16. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    NASA Technical Reports Server (NTRS)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  17. A framework for the automatic generation of surface topologies for abdominal aortic aneurysm models.

    PubMed

    Shum, Judy; Xu, Amber; Chatnuntawech, Itthi; Finol, Ender A

    2011-01-01

    Patient-specific abdominal aortic aneurysms (AAAs) are characterized by local curvature changes, which we assess using a feature-based approach on topologies representative of the AAA outer wall surface. The application of image segmentation methods yields 3D reconstructed surface polygons that contain low-quality elements, unrealistic sharp corners, and surface irregularities. To optimize the quality of the surface topology, an iterative algorithm was developed to perform interpolation of the AAA geometry, topology refinement, and smoothing. Triangular surface topologies are generated based on a Delaunay triangulation algorithm, which is adapted for AAA segmented masks. The boundary of the AAA wall is represented using a signed distance function prior to triangulation. The irregularities on the surface are minimized by an interpolation scheme and the initial coarse triangulation is refined by forcing nodes into equilibrium positions. A surface smoothing algorithm based on a low-pass filter is applied to remove sharp corners. The optimal number of iterations needed for polygon refinement and smoothing is determined by imposing a minimum average element quality index with no significant AAA sac volume change. This framework automatically generates high-quality triangular surface topologies that can be used to characterize local curvature changes of the AAA wall. PMID:20853025
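
    The smoothing stage, a low-pass filter applied to the surface, can be illustrated with simple iterative Laplacian smoothing, in which each vertex is pulled toward the average of its mesh neighbours. The mesh below is a toy pyramid and the damping factor is an arbitrary choice; the published framework couples smoothing with refinement and element-quality checks that are omitted here.

```python
import numpy as np

def laplacian_smooth(vertices, triangles, iterations=10, lam=0.5):
    """Move each vertex toward the average of its neighbours (a simple low-pass filter)."""
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in triangles:
        neighbours[a] |= {b, c}
        neighbours[b] |= {a, c}
        neighbours[c] |= {a, b}
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        avg = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbours)])
        v += lam * (avg - v)
    return v

# Toy pyramid with a sharp apex; smoothing pulls the apex down toward its neighbours.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 2.0)]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
print(laplacian_smooth(verts, tris)[4])   # apex height drops below 2.0
```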

  18. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Liu, Yuan; Sowers, T. Shane; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
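
    The trigger logic described, estimating the altitude that would be lost in a go-around and intervening if a minimum-altitude threshold would be violated, can be sketched with a crude kinematic placeholder. The spool-up time, climb acceleration, and threshold below are invented numbers, not values from the NASA study.

```python
def predicted_altitude_loss(sink_rate_fps, engine_spoolup_s, climb_accel_fps2):
    """Crude estimate: altitude lost while the engines spool up and the sink rate is arrested."""
    arrest_time_s = sink_rate_fps / max(climb_accel_fps2, 1e-6)
    return sink_rate_fps * engine_spoolup_s + 0.5 * sink_rate_fps * arrest_time_s

def should_trigger_go_around(altitude_ft, sink_rate_fps, min_altitude_ft=50.0,
                             engine_spoolup_s=4.0, climb_accel_fps2=3.0):
    loss = predicted_altitude_loss(sink_rate_fps, engine_spoolup_s, climb_accel_fps2)
    return altitude_ft - loss < min_altitude_ft

# A stable approach versus an excessive sink rate close to the ground.
print(should_trigger_go_around(300.0, 12.0))   # False: ample margin remains
print(should_trigger_go_around(150.0, 25.0))   # True: projected floor violates the threshold
```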

  19. Sediment characterization in intertidal zone of the Bourgneuf bay using the Automatic Modified Gaussian Model (AMGM)

    NASA Astrophysics Data System (ADS)

    Verpoorter, C.; Carrère, V.; Combe, J.-P.; Le Corre, L.

    2009-04-01

    Understanding the uppermost layer of cohesive sediment beds provides important clues for predicting future sediment behaviours. Sediment consolidation, grain size, water content and biological slimes (EPS: extracellular polymeric substances) were found to be significant factors influencing erosion resistance. The surface spectral signatures of mudflat sediments reflect such bio-geophysical parameters. The overall shape of the spectrum, also called the continuum, is a function of grain size and moisture content. Composition translates into specific absorption features. Finally, the chlorophyll-a concentration, derived from the strength of the absorption at 675 nm, is a good proxy for biofilm biomass. The Bourgneuf Bay site, south of the Loire river estuary, France, was chosen to represent a range of physical and biological influences on sediment erodability. Field spectral measurements and samples of sediments were collected during various field campaigns. An ASD FieldSpec 3 spectroradiometer was used to produce sediment reflectance hyperspectra in the wavelength range 350-2500 nm. We have developed an automatic procedure based on the Modified Gaussian Model that uses, as a first step, Spectroscopic Derivative Analysis (SDA) to extract the bio-geophysical properties of mudflat sediments from the spectra (Verpoorter et al., 2007). This AMGM algorithm is a powerful tool to deconvolve spectra into two components: first, Gaussian curves for the absorption bands, and second, a straight line in wavenumber space for the continuum. We are investigating the possibility of including other approaches, such as the inverse Gaussian band centred at 2800 nm initially developed by Whiting et al. (2006) to estimate water content. Additionally, soil samples were analysed to determine moisture content, grain size (laser grain size analyses), organic matter content, carbonate content (calcimetry) and clay content. X-ray diffraction analysis was performed on selected non
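
    The core of the MGM decomposition, a continuum that is a straight line in wavenumber plus Gaussian absorption bands, can be sketched as a least-squares fit with SciPy. The single-band spectrum below is synthetic (a chlorophyll-a-like feature near 675 nm), and the parameterization is a simplified stand-in for the AMGM procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def mgm(wavelength_nm, c0, c1, amp, center_nm, width_nm):
    """Continuum (linear in wavenumber) plus a single Gaussian absorption band."""
    wavenumber = 1e7 / wavelength_nm                     # cm^-1
    continuum = c0 + c1 * wavenumber
    band = amp * np.exp(-0.5 * ((wavelength_nm - center_nm) / width_nm) ** 2)
    return continuum + band

wl = np.linspace(400.0, 900.0, 250)
rng = np.random.default_rng(2)
# Synthetic sediment-like spectrum with a chlorophyll-a absorption near 675 nm.
obs = mgm(wl, 0.55, -4e-6, -0.12, 675.0, 15.0) + rng.normal(0, 0.003, wl.size)

p0 = [0.5, 0.0, -0.05, 670.0, 20.0]                      # rough initial guess
params, _ = curve_fit(mgm, wl, obs, p0=p0)
print("fitted band centre %.1f nm, depth %.3f" % (params[3], -params[2]))
```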

  20. Automatic crack propagation tracking

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Weidner, T. J.; Yehia, N. A. B.; Burd, G. S.

    1985-01-01

    A finite element based approach to fully automatic crack propagation tracking is presented. The procedure presented combines fully automatic mesh generation with linear fracture mechanics techniques in a geometrically based finite element code capable of automatically tracking cracks in two-dimensional domains. The automatic mesh generator employs the modified-quadtree technique. Crack propagation increment and direction are predicted using a modified maximum dilatational strain energy density criterion employing the numerical results obtained by meshes of quadratic displacement and singular crack tip